Jan 22 16:28:26 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 16:28:26 crc restorecon[4699]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 16:28:26 crc restorecon[4699]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc 
restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc 
restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 
16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc 
restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc 
restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 
crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc 
restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:26 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 
crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc 
restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 16:28:27 crc restorecon[4699]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc 
restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 16:28:27 crc restorecon[4699]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 16:28:27 crc kubenswrapper[4704]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 16:28:27 crc kubenswrapper[4704]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 16:28:27 crc kubenswrapper[4704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 16:28:27 crc kubenswrapper[4704]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 16:28:27 crc kubenswrapper[4704]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 16:28:27 crc kubenswrapper[4704]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.468901 4704 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471827 4704 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471843 4704 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471849 4704 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471854 4704 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471859 4704 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471865 4704 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471869 4704 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471875 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471880 4704 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471885 4704 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471890 4704 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471895 4704 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471899 4704 feature_gate.go:330] unrecognized feature gate: Example Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471904 4704 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471909 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471914 4704 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471920 4704 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471925 4704 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471930 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471935 4704 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471940 4704 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471945 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471949 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471954 4704 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471958 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471963 4704 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471967 4704 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471994 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.471999 4704 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472004 4704 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472009 4704 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472014 4704 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472019 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472024 4704 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472029 4704 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472033 4704 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472038 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472042 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472047 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472052 4704 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472055 4704 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472062 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472065 4704 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472069 4704 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472072 4704 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472075 4704 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472079 4704 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472082 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472086 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472089 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472092 4704 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472097 4704 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472101 4704 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472104 4704 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472108 4704 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472111 4704 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472114 4704 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472118 4704 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472123 4704 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472127 4704 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472131 4704 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472136 4704 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472140 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472143 4704 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472147 4704 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472150 4704 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472153 4704 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472157 4704 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472160 4704 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472163 4704 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.472167 4704 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472460 4704 flags.go:64] FLAG: --address="0.0.0.0"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472478 4704 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472488 4704 flags.go:64] FLAG: --anonymous-auth="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472496 4704 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472502 4704 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472508 4704 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472515 4704 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472522 4704 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472528 4704 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472533 4704 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472539 4704 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472544 4704 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472549 4704 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472554 4704 flags.go:64] FLAG: --cgroup-root=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472559 4704 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472564 4704 flags.go:64] FLAG: --client-ca-file=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472569 4704 flags.go:64] FLAG: --cloud-config=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472574 4704 flags.go:64] FLAG: --cloud-provider=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472579 4704 flags.go:64] FLAG: --cluster-dns="[]"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472586 4704 flags.go:64] FLAG: --cluster-domain=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472592 4704 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472597 4704 flags.go:64] FLAG: --config-dir=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472602 4704 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472609 4704 flags.go:64] FLAG: --container-log-max-files="5"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472616 4704 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472621 4704 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472627 4704 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472632 4704 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472637 4704 flags.go:64] FLAG: --contention-profiling="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472642 4704 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472647 4704 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472653 4704 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472658 4704 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472664 4704 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472670 4704 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472675 4704 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472680 4704 flags.go:64] FLAG: --enable-load-reader="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472685 4704 flags.go:64] FLAG: --enable-server="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472690 4704 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472696 4704 flags.go:64] FLAG: --event-burst="100"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472700 4704 flags.go:64] FLAG: --event-qps="50"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472704 4704 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472709 4704 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472713 4704 flags.go:64] FLAG: --eviction-hard=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472718 4704 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472722 4704 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472726 4704 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472730 4704 flags.go:64] FLAG: --eviction-soft=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472735 4704 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472739 4704 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472743 4704 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472747 4704 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472751 4704 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472755 4704 flags.go:64] FLAG: --fail-swap-on="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472759 4704 flags.go:64] FLAG: --feature-gates=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472765 4704 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472769 4704 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472774 4704 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472779 4704 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472783 4704 flags.go:64] FLAG: --healthz-port="10248"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472788 4704 flags.go:64] FLAG: --help="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472805 4704 flags.go:64] FLAG: --hostname-override=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472810 4704 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472815 4704 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472820 4704 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472827 4704 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472838 4704 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472843 4704 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472849 4704 flags.go:64] FLAG: --image-service-endpoint=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472854 4704 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472860 4704 flags.go:64] FLAG: --kube-api-burst="100"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472865 4704 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472871 4704 flags.go:64] FLAG: --kube-api-qps="50"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472878 4704 flags.go:64] FLAG: --kube-reserved=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472883 4704 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472889 4704 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472894 4704 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472898 4704 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472902 4704 flags.go:64] FLAG: --lock-file=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472907 4704 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472911 4704 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472916 4704 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472923 4704 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472927 4704 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472931 4704 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472935 4704 flags.go:64] FLAG: --logging-format="text"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472939 4704 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472943 4704 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472948 4704 flags.go:64] FLAG: --manifest-url=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472952 4704 flags.go:64] FLAG: --manifest-url-header=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472957 4704 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472962 4704 flags.go:64] FLAG: --max-open-files="1000000"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472971 4704 flags.go:64] FLAG: --max-pods="110"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472976 4704 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472980 4704 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472984 4704 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472988 4704 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472992 4704 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.472998 4704 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473002 4704 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473013 4704 flags.go:64] FLAG: --node-status-max-images="50"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473017 4704 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473022 4704 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473026 4704 flags.go:64] FLAG: --pod-cidr=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473030 4704 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473037 4704 flags.go:64] FLAG: --pod-manifest-path=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473041 4704 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473045 4704 flags.go:64] FLAG: --pods-per-core="0"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473050 4704 flags.go:64] FLAG: --port="10250"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473054 4704 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473058 4704 flags.go:64] FLAG: --provider-id=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473062 4704 flags.go:64] FLAG: --qos-reserved=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473066 4704 flags.go:64] FLAG: --read-only-port="10255"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473070 4704 flags.go:64] FLAG: --register-node="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473074 4704 flags.go:64] FLAG: --register-schedulable="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473078 4704 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473085 4704 flags.go:64] FLAG: --registry-burst="10"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473089 4704 flags.go:64] FLAG: --registry-qps="5"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473093 4704 flags.go:64] FLAG: --reserved-cpus=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473097 4704 flags.go:64] FLAG: --reserved-memory=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473102 4704 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473107 4704 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473111 4704 flags.go:64] FLAG: --rotate-certificates="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473114 4704 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473120 4704 flags.go:64] FLAG: --runonce="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473124 4704 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473128 4704 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473132 4704 flags.go:64] FLAG: --seccomp-default="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473136 4704 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473140 4704 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473144 4704 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473150 4704 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473154 4704 flags.go:64] FLAG: --storage-driver-password="root"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473158 4704 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473162 4704 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473166 4704 flags.go:64] FLAG: --storage-driver-user="root"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473170 4704 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473174 4704 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473179 4704 flags.go:64] FLAG: --system-cgroups=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473183 4704 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473191 4704 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473195 4704 flags.go:64] FLAG: --tls-cert-file=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473199 4704 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473204 4704 flags.go:64] FLAG: --tls-min-version=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473208 4704 flags.go:64] FLAG: --tls-private-key-file=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473212 4704 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473216 4704 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473221 4704 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473225 4704 flags.go:64] FLAG: --v="2"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473231 4704 flags.go:64] FLAG: --version="false"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473236 4704 flags.go:64] FLAG: --vmodule=""
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473241 4704 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473245 4704 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473366 4704 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473378 4704 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473383 4704 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473393 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473398 4704 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473403 4704 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473408 4704 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473412 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473417 4704 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473420 4704 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473426 4704 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473430 4704 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473434 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473437 4704 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473441 4704 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473444 4704 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473448 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473453 4704 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473457 4704 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473462 4704 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473467 4704 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473472 4704 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473477 4704 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473482 4704 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473487 4704 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473493 4704 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473499 4704 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473504 4704 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473511 4704 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473516 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473519 4704 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473524 4704 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473527 4704 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473531 4704 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473535 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473541 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473545 4704 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473548 4704 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473552 4704 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473556 4704 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473560 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473565 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473570 4704 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473575 4704 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473579 4704 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473583 4704 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473588 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473592 4704 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473597 4704 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473601 4704 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473605 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473609 4704 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473613 4704 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473616 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473620 4704 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473623 4704 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473627 4704 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473630 4704 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473634 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473637 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473641 4704 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473644 4704 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473649 4704 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473653 4704 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473658 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473663 4704 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473667 4704 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473674 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473678 4704 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473682 4704 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.473688 4704 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.473703 4704 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.482383 4704 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.482424 4704 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482564 4704 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482576 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482586 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482595 4704 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482605 4704 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482616 4704 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482627 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482636 4704 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482644 4704 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482652 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482661 4704 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482670 4704 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482678 4704 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482686 4704 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482693 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482701 4704 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482709 4704 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482717 4704 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482724 4704 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482732 4704 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482740 4704 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482749 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482756 4704 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482764 4704 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482772 4704 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482780 4704 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482788 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482819 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482826 4704 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482836 4704 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482844 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482852 4704 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482860 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482867 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482877 4704 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482885 4704 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482892 4704 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482903 4704 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482914 4704 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482926 4704 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482936 4704 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482945 4704 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482954 4704 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482961 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482972 4704 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482982 4704 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.482991 4704 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483001 4704 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483009 4704 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483017 4704 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483026 4704 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483034 4704 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483042 4704 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483049 4704 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483057 4704 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483065 4704 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483072 4704 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483080 4704 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483088 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483095 4704 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483103 4704 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483110 4704 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483118 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483126 4704 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483134 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483141 4704 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483148 4704 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483156 4704 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483164 4704 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483172 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483180 4704 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.483194 4704 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483465 4704 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483480 4704 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483490 4704 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483499 4704 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483508 4704 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483516 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483524 4704 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483532 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483539 4704 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483549 4704 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483556 4704 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483565 4704 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483572 4704 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483579 4704 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483587 4704 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483595 4704 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483603 4704 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483613 4704 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483624 4704 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483633 4704 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483641 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483650 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483659 4704 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483667 4704 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483675 4704 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483683 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483690 4704 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483698 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483705 4704 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483713 4704 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483721 4704 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483728 4704 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483736 4704 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483747 4704 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483758 4704 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483768 4704 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483777 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483786 4704 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483817 4704 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483826 4704 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483833 4704 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483842 4704 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483850 4704 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483857 4704 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483865 4704 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483873 4704 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483881 4704 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483889 4704 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483900 4704 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483909 4704 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483917 4704 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483925 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483932 4704 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483940 4704 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483948 4704 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483956 4704 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483964 4704 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483972 4704 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.483979 4704 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484005 4704 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484013 4704 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484021 4704 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484028 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484036 4704 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484043 4704 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484051 4704 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484059 4704 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484067 4704 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484074 4704 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484082 4704 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.484091 4704 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.484102 4704 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.484354 4704 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.489163 4704 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.489306 4704 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.490214 4704 server.go:997] "Starting client certificate rotation"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.490253 4704 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.490521 4704 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-02 22:29:33.689111879 +0000 UTC
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.490646 4704 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.496290 4704 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.498817 4704 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.501351 4704 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.510146 4704 log.go:25] "Validated CRI v1 runtime API"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.532931 4704 log.go:25] "Validated CRI v1 image API"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.534628 4704 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.538380 4704 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-16-24-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.538430 4704 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.564577 4704 manager.go:217] Machine: {Timestamp:2026-01-22 16:28:27.561785624 +0000 UTC m=+0.206332394 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2e1f8319-6b24-40fc-94be-3f7f227a5746 BootID:13eee035-d079-4087-986f-982a570291de Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:09:f3:ec Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:09:f3:ec Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1e:9b:e7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:54:4f:28 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:16:06:0a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:44:9a:6c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5e:ab:1d:56:fd:c4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7a:a9:eb:5e:2c:2e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.564964 4704 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.565203 4704 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.565874 4704 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.566126 4704 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.566179 4704 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.566515 4704 topology_manager.go:138] "Creating topology manager with none policy"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.566530 4704 container_manager_linux.go:303] "Creating device plugin manager"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.566851 4704 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.566893 4704 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.567323 4704 state_mem.go:36] "Initialized new in-memory state store"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.567460 4704 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.568386 4704 kubelet.go:418] "Attempting to sync node with API server"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.568413 4704 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.568440 4704 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.568457 4704 kubelet.go:324] "Adding apiserver pod source"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.568473 4704 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.571897 4704 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.572920 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused
Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.573067 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError"
Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.573046 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused
Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.573182 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError"
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.573460 4704 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.575202 4704 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576274 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576379 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576465 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576530 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576593 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576644 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576694 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576750 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576855 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.576931 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.577024 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.577083 4704 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.577364 4704 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.578132 4704 server.go:1280] "Started kubelet" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.578506 4704 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.579051 4704 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.579425 4704 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.580333 4704 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.580617 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.580677 4704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 16:28:27 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.580834 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:44:04.089399006 +0000 UTC Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.581024 4704 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.581051 4704 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.581223 4704 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.581405 4704 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.581828 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.581912 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.582529 4704 factory.go:55] Registering systemd factory Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.582561 4704 factory.go:221] Registration of the systemd container factory successfully Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.582852 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="200ms" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.583175 4704 factory.go:153] Registering CRI-O factory Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.583285 4704 factory.go:221] Registration of the crio container factory successfully Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.583459 4704 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.583567 4704 factory.go:103] Registering Raw factory Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.583675 4704 manager.go:1196] Started watching for new ooms in manager Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.584654 4704 manager.go:319] Starting recovery of all containers Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.585308 4704 server.go:460] "Adding debug handlers to kubelet server" Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.585262 4704 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d1a72789501d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:28:27.578098132 +0000 UTC m=+0.222644842,LastTimestamp:2026-01-22 16:28:27.578098132 +0000 UTC m=+0.222644842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.589817 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.589960 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590050 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590151 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590254 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590331 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" 
seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590410 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590516 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590628 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590735 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590842 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.590926 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591007 4704 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591130 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591232 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591317 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591403 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591479 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591565 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591681 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.591768 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592133 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592179 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592201 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592213 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592227 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592254 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592277 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592291 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592305 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592315 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592327 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592341 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592351 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592364 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592376 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592386 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592399 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592410 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592423 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592434 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592445 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592458 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" 
seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592468 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592482 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592530 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592541 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592554 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592565 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592577 4704 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592586 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592597 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592614 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592630 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592644 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592660 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592674 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592686 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592700 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592710 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592720 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592734 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592743 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592757 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592766 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592776 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592806 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592815 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592825 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592837 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592848 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592861 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592901 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592912 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" 
seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592924 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592935 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592948 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592959 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592969 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.592981 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 
16:28:27.592990 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593002 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593014 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593023 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593035 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593044 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593058 4704 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593067 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593080 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593103 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593114 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593126 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593138 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593150 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593164 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593174 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593186 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.593197 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.597473 4704 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598343 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598650 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598690 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598719 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598750 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598780 4704 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598863 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598920 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598953 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.598981 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599012 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599043 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599073 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599123 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599151 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599180 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599208 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599236 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" 
seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599285 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599310 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599336 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599360 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599386 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599819 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599863 4704 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.599886 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600220 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600300 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600322 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600341 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600362 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600382 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600400 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600656 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600698 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600725 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600752 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.600780 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601052 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601100 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601118 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601135 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601150 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601164 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601180 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601195 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601218 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601233 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601247 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601263 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601278 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601293 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601305 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601317 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601330 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601354 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601440 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601476 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601503 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601527 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601690 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601709 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601721 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601733 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601744 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601755 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601766 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601777 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601847 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601860 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601870 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601881 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601897 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601916 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601927 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601944 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601958 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601969 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601980 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.601992 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602003 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602013 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602026 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602037 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602049 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602059 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602070 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602081 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602099 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602111 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602121 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" 
seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602132 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602143 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602153 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602164 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602176 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602190 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602201 4704 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602213 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602229 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602241 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602252 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602266 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602277 4704 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602290 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602300 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602310 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602321 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602332 4704 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602342 4704 reconstruct.go:97] "Volume reconstruction finished" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.602350 4704 reconciler.go:26] "Reconciler: start to 
sync state" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.610634 4704 manager.go:324] Recovery completed Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.618916 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.620708 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.620773 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.620785 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.626298 4704 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.626331 4704 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.626364 4704 state_mem.go:36] "Initialized new in-memory state store" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.630346 4704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.632401 4704 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.632455 4704 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.632482 4704 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.632535 4704 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 16:28:27 crc kubenswrapper[4704]: W0122 16:28:27.633297 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.633413 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.637812 4704 policy_none.go:49] "None policy: Start" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.638781 4704 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.638829 4704 state_mem.go:35] "Initializing new in-memory state store" Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.681629 4704 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698015 4704 manager.go:334] "Starting Device Plugin manager" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698092 4704 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698109 4704 server.go:79] "Starting device plugin registration server" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698643 4704 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698680 4704 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698850 4704 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698977 4704 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.698991 4704 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.708396 4704 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.732976 4704 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.733090 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.734186 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.734230 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.734242 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.734395 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.734832 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.734887 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.735215 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.735230 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.735239 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.735329 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.735522 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.735572 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736218 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736254 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736262 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736372 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736574 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736630 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736890 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736908 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736915 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736959 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736974 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.736983 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.737055 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.737226 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.737263 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.738569 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.738603 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.738615 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.738631 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.738653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.738663 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.740866 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.740902 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.740919 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.740927 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 
crc kubenswrapper[4704]: I0122 16:28:27.740939 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.740946 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.741254 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.741296 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.742308 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.742329 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.742339 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.783725 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="400ms" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.798830 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.799677 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.799713 4704 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.799724 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.799750 4704 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:28:27 crc kubenswrapper[4704]: E0122 16:28:27.800166 4704 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.803254 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.803375 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.803483 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.803661 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.803772 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.803905 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.803984 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804048 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804080 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804106 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804140 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804167 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804225 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804264 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.804319 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905407 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905498 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905617 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905630 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 
16:28:27.905688 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905731 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905750 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905768 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905769 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905861 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905918 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905959 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905955 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.905935 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906009 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906014 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906031 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906028 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906101 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906134 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906142 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906163 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906188 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906195 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906229 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906263 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 
16:28:27.906229 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906294 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906336 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 16:28:27 crc kubenswrapper[4704]: I0122 16:28:27.906391 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.000667 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.001985 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.002025 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.002057 4704 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.002080 4704 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.002485 4704 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.071368 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.078865 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.094534 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.095866 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bd9bd9fa5c85994d4866b71560ad99a8550fbec47693f641c962d450c64b8eec WatchSource:0}: Error finding container bd9bd9fa5c85994d4866b71560ad99a8550fbec47693f641c962d450c64b8eec: Status 404 returned error can't find the container with id bd9bd9fa5c85994d4866b71560ad99a8550fbec47693f641c962d450c64b8eec Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.098677 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e1ca2f6b5763d073be133020c44d513a60e24907701f256bf86d9e5c658096f0 WatchSource:0}: Error finding container e1ca2f6b5763d073be133020c44d513a60e24907701f256bf86d9e5c658096f0: 
Status 404 returned error can't find the container with id e1ca2f6b5763d073be133020c44d513a60e24907701f256bf86d9e5c658096f0 Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.108693 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5b5fa6ebae75a2a4ef42b82ed0eee0dd88f44bde7967e2d7d70bda2bf73f72d7 WatchSource:0}: Error finding container 5b5fa6ebae75a2a4ef42b82ed0eee0dd88f44bde7967e2d7d70bda2bf73f72d7: Status 404 returned error can't find the container with id 5b5fa6ebae75a2a4ef42b82ed0eee0dd88f44bde7967e2d7d70bda2bf73f72d7 Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.110164 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.114123 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.133210 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5bf02fdd76aeef29badbd2db806eab307902e3daa4a119ffe23545ce37def573 WatchSource:0}: Error finding container 5bf02fdd76aeef29badbd2db806eab307902e3daa4a119ffe23545ce37def573: Status 404 returned error can't find the container with id 5bf02fdd76aeef29badbd2db806eab307902e3daa4a119ffe23545ce37def573 Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.137998 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5a5603078563279fe2042a0fa86ed905588baf6b6c4176a34aad9273d640abdf WatchSource:0}: Error finding container 
5a5603078563279fe2042a0fa86ed905588baf6b6c4176a34aad9273d640abdf: Status 404 returned error can't find the container with id 5a5603078563279fe2042a0fa86ed905588baf6b6c4176a34aad9273d640abdf Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.185273 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="800ms" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.403157 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.404421 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.404463 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.404478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.404501 4704 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.404962 4704 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.579955 4704 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.581951 4704 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:47:33.297054567 +0000 UTC Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.621079 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.621213 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.637588 4704 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236" exitCode=0 Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.637707 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.638006 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.637873 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5b5fa6ebae75a2a4ef42b82ed0eee0dd88f44bde7967e2d7d70bda2bf73f72d7"} Jan 22 16:28:28 crc 
kubenswrapper[4704]: I0122 16:28:28.639466 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.639517 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.639531 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.640398 4704 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2" exitCode=0 Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.640463 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.640522 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e1ca2f6b5763d073be133020c44d513a60e24907701f256bf86d9e5c658096f0"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.640629 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.641478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.641525 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.641534 4704 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.642005 4704 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e15a3d60d8c0bd8785edc1f0be943deff729ea2920c38c3af5ae2a9d7fa2089c" exitCode=0 Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.642028 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e15a3d60d8c0bd8785edc1f0be943deff729ea2920c38c3af5ae2a9d7fa2089c"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.642052 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bd9bd9fa5c85994d4866b71560ad99a8550fbec47693f641c962d450c64b8eec"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.642123 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.644382 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.644412 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.644422 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.645382 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf"} Jan 22 16:28:28 crc kubenswrapper[4704]: 
I0122 16:28:28.645417 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5a5603078563279fe2042a0fa86ed905588baf6b6c4176a34aad9273d640abdf"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.646843 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb" exitCode=0 Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.646873 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.646893 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5bf02fdd76aeef29badbd2db806eab307902e3daa4a119ffe23545ce37def573"} Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.646969 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.647634 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.647699 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.647715 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.650574 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.651597 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.651632 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:28 crc kubenswrapper[4704]: I0122 16:28:28.651641 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.717634 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.717719 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.832113 4704 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d1a72789501d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:28:27.578098132 +0000 UTC m=+0.222644842,LastTimestamp:2026-01-22 16:28:27.578098132 +0000 UTC 
m=+0.222644842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:28:28 crc kubenswrapper[4704]: W0122 16:28:28.922902 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.922975 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:28:28 crc kubenswrapper[4704]: E0122 16:28:28.986028 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="1.6s" Jan 22 16:28:29 crc kubenswrapper[4704]: W0122 16:28:29.162721 4704 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.249:6443: connect: connection refused Jan 22 16:28:29 crc kubenswrapper[4704]: E0122 16:28:29.162819 4704 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.249:6443: connect: connection refused" logger="UnhandledError" Jan 22 16:28:29 crc kubenswrapper[4704]: 
I0122 16:28:29.205967 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.207356 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.207420 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.207432 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.207482 4704 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:28:29 crc kubenswrapper[4704]: E0122 16:28:29.208107 4704 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.249:6443: connect: connection refused" node="crc" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.582441 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:04:02.590750791 +0000 UTC Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.624736 4704 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.650568 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3107659da8eed6f0a85da86064deaeaf0101eea14efd6380f3aa8a2056674f69"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.650615 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 
16:28:29.651656 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.651683 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.651692 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.652389 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.652443 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.652454 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.652539 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.653315 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.653367 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:29 crc kubenswrapper[4704]: 
I0122 16:28:29.653386 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.654212 4704 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="daf13b22b7b3d972bfd2c6ca5ea0c408aaeffd323b8cff9fe410cbf119c17106" exitCode=0 Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.654269 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"daf13b22b7b3d972bfd2c6ca5ea0c408aaeffd323b8cff9fe410cbf119c17106"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.654348 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.654849 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.654869 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.654877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.657195 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.657227 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.657263 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.657247 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.659654 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.659702 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.659715 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.662532 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.662615 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.662632 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.662644 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2"} Jan 22 16:28:29 crc kubenswrapper[4704]: I0122 16:28:29.662655 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64"} Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.583404 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:58:20.335907711 +0000 UTC Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.605485 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.666986 4704 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="14899c8a6e9a3715ccd543cb80055c965966f95a370dee0b39e950e8ad9a0c41" exitCode=0 Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.667166 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.667822 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"14899c8a6e9a3715ccd543cb80055c965966f95a370dee0b39e950e8ad9a0c41"} Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 
16:28:30.667933 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.668273 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.668630 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.669289 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.669320 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.669329 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.671070 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.671099 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.671118 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.671739 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.671813 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.671936 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.673769 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.673837 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.673851 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.808294 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.809391 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.809431 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.809443 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:30 crc kubenswrapper[4704]: I0122 16:28:30.809465 4704 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.584565 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:28:29.049060188 +0000 UTC Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.675371 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c5a1e149e4545cf645cd155ea4398e0410026c536c7be7436b11185cb878b9e0"} Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.675414 4704 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.675442 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5c7947b8159d8727580502306bc4d52a27901fb1d3a8d23b64085a8ae54fc40e"} Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.675461 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6f17e014d6f2fd8dc25b576da29bf68811fa8d706f37267bf93e5b99429d5bf6"} Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.675476 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1e1e5ed0889546fe58f67ad6611e1e227f181a2e24de8989fade176837d5da65"} Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.675491 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"210613ff04c4307588239b7724f5361cfcc18337653369a031b98883c1914adf"} Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.675638 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.676180 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.676206 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.676215 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.676711 4704 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.676757 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:31 crc kubenswrapper[4704]: I0122 16:28:31.676770 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:32 crc kubenswrapper[4704]: I0122 16:28:32.584943 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:49:45.901321013 +0000 UTC Jan 22 16:28:32 crc kubenswrapper[4704]: I0122 16:28:32.779787 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:28:32 crc kubenswrapper[4704]: I0122 16:28:32.779992 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:32 crc kubenswrapper[4704]: I0122 16:28:32.781082 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:32 crc kubenswrapper[4704]: I0122 16:28:32.781123 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:32 crc kubenswrapper[4704]: I0122 16:28:32.781134 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.555283 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.555629 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.555743 4704 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.557446 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.557562 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.557600 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.585656 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:00:20.146672353 +0000 UTC Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.606230 4704 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.606361 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.680813 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.681881 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.681925 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:33 crc kubenswrapper[4704]: I0122 16:28:33.681944 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.144631 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.364775 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.365054 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.366677 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.366783 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.366841 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.586313 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:18:02.016484415 +0000 UTC Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.683411 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.684659 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:34 
crc kubenswrapper[4704]: I0122 16:28:34.684717 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:34 crc kubenswrapper[4704]: I0122 16:28:34.684739 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.016007 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.016201 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.017444 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.017579 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.017673 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.020047 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.586472 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:29:04.8925311 +0000 UTC Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.685387 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.686660 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.686830 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:35 crc kubenswrapper[4704]: I0122 16:28:35.686966 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.270371 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.586714 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:25:08.611571136 +0000 UTC Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.688251 4704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.688320 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.689998 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.690064 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.690081 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:36 crc kubenswrapper[4704]: I0122 16:28:36.925981 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:37 crc kubenswrapper[4704]: I0122 16:28:37.587132 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:40:55.109678084 +0000 UTC Jan 22 16:28:37 crc kubenswrapper[4704]: I0122 16:28:37.691018 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:37 crc kubenswrapper[4704]: I0122 16:28:37.691993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:37 crc kubenswrapper[4704]: I0122 16:28:37.692130 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:37 crc kubenswrapper[4704]: I0122 16:28:37.692206 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:37 crc kubenswrapper[4704]: E0122 16:28:37.708526 4704 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 16:28:38 crc kubenswrapper[4704]: I0122 16:28:38.587997 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:13:04.052921992 +0000 UTC Jan 22 16:28:39 crc kubenswrapper[4704]: I0122 16:28:39.020256 4704 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 16:28:39 crc kubenswrapper[4704]: I0122 16:28:39.020361 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 16:28:39 crc 
kubenswrapper[4704]: I0122 16:28:39.581296 4704 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 16:28:39 crc kubenswrapper[4704]: I0122 16:28:39.588593 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:01:44.846892853 +0000 UTC Jan 22 16:28:39 crc kubenswrapper[4704]: E0122 16:28:39.625987 4704 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 16:28:40 crc kubenswrapper[4704]: I0122 16:28:40.475276 4704 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 16:28:40 crc kubenswrapper[4704]: I0122 16:28:40.475377 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 16:28:40 crc kubenswrapper[4704]: I0122 16:28:40.480198 4704 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 16:28:40 crc kubenswrapper[4704]: I0122 16:28:40.480270 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 16:28:40 crc kubenswrapper[4704]: I0122 16:28:40.592513 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:09:25.794517566 +0000 UTC Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.411997 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.412197 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.413248 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.413270 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.413278 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.449023 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.593393 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 
17:03:21.683589301 +0000 UTC Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.700549 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.701456 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.701536 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.701548 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:41 crc kubenswrapper[4704]: I0122 16:28:41.713133 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 16:28:42 crc kubenswrapper[4704]: I0122 16:28:42.594453 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:58:00.496729785 +0000 UTC Jan 22 16:28:42 crc kubenswrapper[4704]: I0122 16:28:42.702600 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:42 crc kubenswrapper[4704]: I0122 16:28:42.703593 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:42 crc kubenswrapper[4704]: I0122 16:28:42.703635 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:42 crc kubenswrapper[4704]: I0122 16:28:42.703648 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.563720 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 
16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.564036 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.565551 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.565682 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.565780 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.569213 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.595566 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:38:42.531652998 +0000 UTC Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.607027 4704 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body= Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.607169 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.704997 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:43 
crc kubenswrapper[4704]: I0122 16:28:43.706344 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.706412 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.706429 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.950042 4704 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 16:28:43 crc kubenswrapper[4704]: I0122 16:28:43.965353 4704 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 16:28:44 crc kubenswrapper[4704]: I0122 16:28:44.285456 4704 csr.go:261] certificate signing request csr-dbrxz is approved, waiting to be issued Jan 22 16:28:44 crc kubenswrapper[4704]: I0122 16:28:44.296946 4704 csr.go:257] certificate signing request csr-dbrxz is issued Jan 22 16:28:44 crc kubenswrapper[4704]: I0122 16:28:44.596298 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:17:28.83400913 +0000 UTC Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.297978 4704 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 16:23:44 +0000 UTC, rotation deadline is 2026-12-11 15:56:33.624987586 +0000 UTC Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.298039 4704 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7751h27m48.326951761s for next certificate rotation Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.478064 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.479596 4704 trace.go:236] Trace[686643024]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:28:31.287) (total time: 14192ms): Jan 22 16:28:45 crc kubenswrapper[4704]: Trace[686643024]: ---"Objects listed" error: 14192ms (16:28:45.479) Jan 22 16:28:45 crc kubenswrapper[4704]: Trace[686643024]: [14.192490284s] [14.192490284s] END Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.479628 4704 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.480135 4704 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.480183 4704 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.480396 4704 trace.go:236] Trace[330431268]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:28:31.597) (total time: 13883ms): Jan 22 16:28:45 crc kubenswrapper[4704]: Trace[330431268]: ---"Objects listed" error: 13883ms (16:28:45.480) Jan 22 16:28:45 crc kubenswrapper[4704]: Trace[330431268]: [13.883323498s] [13.883323498s] END Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.480429 4704 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.480995 4704 trace.go:236] Trace[935793155]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:28:31.608) (total time: 13872ms): Jan 22 16:28:45 crc kubenswrapper[4704]: 
Trace[935793155]: ---"Objects listed" error: 13872ms (16:28:45.480) Jan 22 16:28:45 crc kubenswrapper[4704]: Trace[935793155]: [13.872130895s] [13.872130895s] END Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.481013 4704 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.482441 4704 trace.go:236] Trace[158310409]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 16:28:30.887) (total time: 14595ms): Jan 22 16:28:45 crc kubenswrapper[4704]: Trace[158310409]: ---"Objects listed" error: 14595ms (16:28:45.482) Jan 22 16:28:45 crc kubenswrapper[4704]: Trace[158310409]: [14.595268665s] [14.595268665s] END Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.482458 4704 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.516968 4704 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39906->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.517035 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39906->192.168.126.11:17697: read: connection reset by peer" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.517387 4704 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.517430 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.588385 4704 apiserver.go:52] "Watching apiserver" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.591850 4704 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.592156 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.592860 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.592939 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.593013 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.593270 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.593627 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.593685 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.593758 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.593810 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.593872 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.594979 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.595323 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.595362 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.596059 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.596065 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.596116 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.596338 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.596401 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:54:50.3918474 +0000 UTC Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.596505 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.596757 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.621363 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.641038 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.652354 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.666938 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.677819 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.682099 4704 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.689526 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.700957 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.711352 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.713981 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.714135 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368" exitCode=255 Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.714184 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368"} Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.726598 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.738343 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.755875 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.765628 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.779809 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782030 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782083 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782114 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782136 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782160 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782183 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782203 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782223 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782246 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782272 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782294 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782314 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782337 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782360 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782379 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782404 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782426 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782449 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782470 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782480 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782492 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782543 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782580 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782619 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782637 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782670 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782666 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782696 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782745 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782764 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782781 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782815 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782832 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782854 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782881 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782870 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782900 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782919 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782938 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782953 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782972 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782988 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783009 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783031 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783049 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783066 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783083 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783099 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.782986 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783026 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783195 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783282 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783430 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784977 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785118 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785137 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785158 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785173 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785189 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785204 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785219 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785236 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785250 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785265 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785281 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785296 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785310 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:28:45 crc 
kubenswrapper[4704]: I0122 16:28:45.785327 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785346 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785364 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785393 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785422 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785441 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785473 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785493 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785513 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785531 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785546 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785559 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785573 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785587 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785602 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785617 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785632 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") 
pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785667 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785683 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785703 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785728 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785743 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785767 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785782 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785828 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785857 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785871 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785907 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 16:28:45 crc 
kubenswrapper[4704]: I0122 16:28:45.785944 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785961 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785996 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786010 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786034 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786051 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786068 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786083 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786106 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786122 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786137 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786155 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786170 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786186 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786202 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786217 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786233 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786250 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786265 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786285 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786301 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783512 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: 
"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783561 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783665 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783766 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783778 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.783824 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784008 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784051 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784106 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784193 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784270 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784295 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784317 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784374 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784410 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784437 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784465 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786619 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784528 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786658 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784576 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784587 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784590 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784646 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784723 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784772 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784945 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.784952 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.785087 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786835 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786858 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.787259 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.787395 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.787479 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.787589 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.787838 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.788216 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.788262 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.788258 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.788285 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.788773 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.788865 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.788865 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789077 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789304 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789455 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789507 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789315 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789729 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789750 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.789907 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.790043 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.790088 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.790441 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.790636 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.790725 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.791468 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.791894 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.791986 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.792067 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.792080 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.792321 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.792399 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.792612 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.792705 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.786315 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.793328 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.794911 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.794977 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.794989 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.798480 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.798776 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.793096 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.801936 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.803220 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.803860 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.804087 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.804098 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.804255 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.804264 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.804426 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.804590 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.806010 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.806354 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.807027 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.806605 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809245 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809382 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809514 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809619 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809651 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809674 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809703 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809726 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809746 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809768 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809808 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809850 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809871 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" 
(UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809889 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809908 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809933 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809964 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.809984 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810121 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810152 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810193 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810219 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810243 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810268 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810293 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810315 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810334 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810361 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810380 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810398 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810423 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810458 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810480 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810503 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810528 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 16:28:45 
crc kubenswrapper[4704]: I0122 16:28:45.810549 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810568 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810587 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810608 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810630 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810656 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810676 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810698 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810718 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810737 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810755 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810775 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810811 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810853 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810873 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.810920 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811226 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") 
pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811253 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811274 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811294 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811321 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811344 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811396 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811427 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811448 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811469 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811489 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811537 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811572 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811593 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811611 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811645 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811666 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811685 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811706 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811739 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811759 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811784 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811829 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811852 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811874 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811899 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811920 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811945 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811965 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.811988 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812007 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812025 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812050 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812070 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812090 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812111 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812132 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812189 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812217 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812243 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812268 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812289 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812313 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812344 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812366 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812398 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812423 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812447 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812471 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812494 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812517 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.812782 4704 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825557 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825631 4704 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825648 4704 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825686 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825702 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825715 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825728 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825749 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825761 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825778 4704 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825810 4704 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825829 4704 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825843 4704 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825855 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825873 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825887 4704 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825900 4704 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825911 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825927 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825940 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825953 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825966 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825985 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825997 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826009 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826025 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826039 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826050 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826062 4704 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826076 4704 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826088 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826099 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826114 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826132 4704 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826144 4704 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826155 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826166 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826181 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826196 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826208 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826225 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826236 4704 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826247 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826258 4704 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826282 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826292 4704 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826304 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826315 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826330 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826342 4704 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826353 4704 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826369 4704 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826380 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826391 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826401 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826415 4704 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826425 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826438 4704 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826449 4704 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826462 4704 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826473 4704 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826484 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826494 4704 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826509 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826520 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826532 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826548 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826560 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826570 4704 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826580 4704 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826599 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826610 4704 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826624 4704 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826636 4704 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826652 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826664 4704 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826676 4704 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826687 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826706 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826715 4704 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826727 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826742 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826756 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826768 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826778 4704 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826809 4704 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826821 4704 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826832 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826844 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826862 4704 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826873 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826884 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826902 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.814651 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.815995 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.817900 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.817996 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.818221 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.818312 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.818392 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.818480 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.818601 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.818873 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.818934 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819082 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819123 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819289 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819312 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819464 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819599 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819693 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.819975 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820117 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820201 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820268 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.827234 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820412 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820452 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820518 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820674 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820748 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.820909 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825034 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825685 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825938 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825959 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.825986 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826005 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826024 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826201 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.826257 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.827702 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.827729 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.827864 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.828723 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.828938 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:28:46.328911144 +0000 UTC m=+18.973457844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.829138 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.829364 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.829940 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.832629 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.832874 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.833375 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.833660 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.833737 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.834161 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.834322 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.814246 4704 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.838283 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.838842 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.839153 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.839603 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840127 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840149 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840108 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840441 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840536 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840686 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840568 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840725 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.840886 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.841033 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.841078 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.841199 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.842737 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.843018 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.843511 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.844912 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.846135 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.846385 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.846419 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.846894 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.848592 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.848679 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:46.348658265 +0000 UTC m=+18.993204965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.848901 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.848977 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.849172 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.849430 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.849522 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.849595 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.849689 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.849733 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-22 16:28:46.349724444 +0000 UTC m=+18.994271144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.850016 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.850047 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.850144 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.850445 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.850533 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.850647 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.851022 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.851275 4704 scope.go:117] "RemoveContainer" containerID="9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.851431 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.852268 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.852967 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.853137 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.853211 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.853547 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.853681 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.853886 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.853974 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.853985 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.854261 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.860521 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.864964 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.873237 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.873278 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.873300 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.873389 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:46.373361232 +0000 UTC m=+19.017907932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.873650 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.873679 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.873928 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.873948 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.873963 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:45 crc kubenswrapper[4704]: E0122 16:28:45.874023 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:46.37400418 +0000 UTC m=+19.018550880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.875118 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.879771 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.885850 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.886453 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.893698 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.923214 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928361 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928414 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928457 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928469 4704 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928478 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928488 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928497 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928505 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928514 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928522 4704 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928531 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928541 4704 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928549 4704 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928557 4704 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928564 4704 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928720 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928731 4704 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928740 4704 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928748 4704 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928757 4704 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928764 4704 reconciler_common.go:293] "Volume 
detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928772 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928781 4704 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928803 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928812 4704 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928820 4704 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928827 4704 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928862 4704 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928881 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928901 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928909 4704 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.928900 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929093 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929107 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc 
kubenswrapper[4704]: I0122 16:28:45.929115 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929124 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929133 4704 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929142 4704 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929153 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929164 4704 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929176 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929187 4704 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929197 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929207 4704 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929218 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929229 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929237 4704 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929246 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929255 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on 
node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929264 4704 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929272 4704 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929280 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929312 4704 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929321 4704 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929330 4704 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929339 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 
16:28:45.929347 4704 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929355 4704 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929363 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929372 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929384 4704 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929395 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929405 4704 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929415 4704 reconciler_common.go:293] "Volume detached for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929425 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929436 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929446 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929455 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929464 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929477 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929488 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929498 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929509 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929520 4704 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929529 4704 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929539 4704 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929549 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929559 4704 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" 
Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929569 4704 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929579 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929591 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929601 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929611 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929622 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929632 4704 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929713 4704 
reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929725 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929734 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929757 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929765 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929773 4704 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929781 4704 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929805 4704 reconciler_common.go:293] "Volume 
detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929816 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929827 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929837 4704 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929856 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929865 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929874 4704 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929881 4704 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929890 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929898 4704 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929905 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: I0122 16:28:45.929913 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 16:28:45 crc kubenswrapper[4704]: W0122 16:28:45.939308 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f5869e108e64b87625760ca85005960f1ec7101ff7b80bed1120edd1ab7615bc WatchSource:0}: Error finding container f5869e108e64b87625760ca85005960f1ec7101ff7b80bed1120edd1ab7615bc: Status 404 returned error can't find the container with id f5869e108e64b87625760ca85005960f1ec7101ff7b80bed1120edd1ab7615bc Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.206532 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.216545 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 16:28:46 crc kubenswrapper[4704]: W0122 16:28:46.217350 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-e866d4555ebce63f57fb260abbd5cb259e7933697afa92cac94c5775ab144961 WatchSource:0}: Error finding container e866d4555ebce63f57fb260abbd5cb259e7933697afa92cac94c5775ab144961: Status 404 returned error can't find the container with id e866d4555ebce63f57fb260abbd5cb259e7933697afa92cac94c5775ab144961 Jan 22 16:28:46 crc kubenswrapper[4704]: W0122 16:28:46.234480 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-e34a8714dffaac2be0fa41a2d30af72527d1f83e56522cd3be4096040884e6ec WatchSource:0}: Error finding container e34a8714dffaac2be0fa41a2d30af72527d1f83e56522cd3be4096040884e6ec: Status 404 returned error can't find the container with id e34a8714dffaac2be0fa41a2d30af72527d1f83e56522cd3be4096040884e6ec Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.324408 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ztlx4"] Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.324697 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.326205 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.326489 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.326903 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.328184 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.338147 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.338366 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:28:47.338325185 +0000 UTC m=+19.982871885 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.344152 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.373874 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.387105 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.403147 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.421899 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.438668 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c93a4369-3f1a-4707-9e55-3968cfef2744-serviceca\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.438703 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpkc\" (UniqueName: \"kubernetes.io/projected/c93a4369-3f1a-4707-9e55-3968cfef2744-kube-api-access-hqpkc\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.438726 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.438752 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c93a4369-3f1a-4707-9e55-3968cfef2744-host\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.438769 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.438786 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.438823 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 
16:28:46.438852 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.438913 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.438946 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:47.438927862 +0000 UTC m=+20.083474652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.438969 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:47.438959803 +0000 UTC m=+20.083506503 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439002 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439014 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439024 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439051 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:47.439039195 +0000 UTC m=+20.083585895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439077 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439093 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439105 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.439140 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:47.439132208 +0000 UTC m=+20.083678998 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.450877 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.479164 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.515303 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.539912 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c93a4369-3f1a-4707-9e55-3968cfef2744-serviceca\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.539962 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqpkc\" (UniqueName: \"kubernetes.io/projected/c93a4369-3f1a-4707-9e55-3968cfef2744-kube-api-access-hqpkc\") pod 
\"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.539995 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c93a4369-3f1a-4707-9e55-3968cfef2744-host\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.540157 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c93a4369-3f1a-4707-9e55-3968cfef2744-host\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.541580 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c93a4369-3f1a-4707-9e55-3968cfef2744-serviceca\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.560080 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqpkc\" (UniqueName: \"kubernetes.io/projected/c93a4369-3f1a-4707-9e55-3968cfef2744-kube-api-access-hqpkc\") pod \"node-ca-ztlx4\" (UID: \"c93a4369-3f1a-4707-9e55-3968cfef2744\") " pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.597196 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:13:22.914806287 +0000 UTC Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.633923 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:46 crc kubenswrapper[4704]: E0122 16:28:46.634131 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.647362 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ztlx4" Jan 22 16:28:46 crc kubenswrapper[4704]: W0122 16:28:46.659319 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc93a4369_3f1a_4707_9e55_3968cfef2744.slice/crio-3febbda45e992035f4e8eea691e0cafe21414f80bd59d5e0105db1f5de3332ef WatchSource:0}: Error finding container 3febbda45e992035f4e8eea691e0cafe21414f80bd59d5e0105db1f5de3332ef: Status 404 returned error can't find the container with id 3febbda45e992035f4e8eea691e0cafe21414f80bd59d5e0105db1f5de3332ef Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.718100 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.724684 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.725215 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.729756 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mccb2"] Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.730099 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.733278 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.733493 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.733565 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.734430 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nndw6"] Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.735285 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.736262 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-77bsn"] Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.736558 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738140 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738243 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738378 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hsg8r"] Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738728 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738435 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738466 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738523 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.738577 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.740076 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.740366 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.742860 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.742900 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.742914 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f5869e108e64b87625760ca85005960f1ec7101ff7b80bed1120edd1ab7615bc"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.743454 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.743648 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.743785 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.745686 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ztlx4" event={"ID":"c93a4369-3f1a-4707-9e55-3968cfef2744","Type":"ContainerStarted","Data":"3febbda45e992035f4e8eea691e0cafe21414f80bd59d5e0105db1f5de3332ef"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.746503 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.747087 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e34a8714dffaac2be0fa41a2d30af72527d1f83e56522cd3be4096040884e6ec"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.747213 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.752278 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.752341 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e866d4555ebce63f57fb260abbd5cb259e7933697afa92cac94c5775ab144961"} Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.776966 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.815768 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.834842 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842297 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-system-cni-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842350 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cnibin\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842394 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-cnibin\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842427 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842457 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-k8s-cni-cncf-io\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842477 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-conf-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842511 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-os-release\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842532 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-etc-kubernetes\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842551 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnz9w\" (UniqueName: \"kubernetes.io/projected/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-kube-api-access-fnz9w\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842569 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-system-cni-dir\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842590 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbl92\" (UniqueName: \"kubernetes.io/projected/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-kube-api-access-hbl92\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842610 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-netns\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 
16:28:46.842666 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-kubelet\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842689 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-multus-certs\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842709 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8z7z\" (UniqueName: \"kubernetes.io/projected/e8e25829-99af-4717-87f3-43a79b9d8c26-kube-api-access-g8z7z\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842728 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cni-binary-copy\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842764 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8e25829-99af-4717-87f3-43a79b9d8c26-rootfs\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " 
pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842781 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8e25829-99af-4717-87f3-43a79b9d8c26-proxy-tls\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842818 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-cni-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842836 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-cni-bin\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842857 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx556\" (UniqueName: \"kubernetes.io/projected/9bb5fd98-0b3a-4412-a083-80d87ee360f4-kube-api-access-gx556\") pod \"node-resolver-mccb2\" (UID: \"9bb5fd98-0b3a-4412-a083-80d87ee360f4\") " pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842892 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-cni-binary-copy\") pod \"multus-77bsn\" (UID: 
\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842909 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842927 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-cni-multus\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842967 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-os-release\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.842995 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8e25829-99af-4717-87f3-43a79b9d8c26-mcd-auth-proxy-config\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.843035 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-socket-dir-parent\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.843051 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-hostroot\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.843069 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-daemon-config\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.843089 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9bb5fd98-0b3a-4412-a083-80d87ee360f4-hosts-file\") pod \"node-resolver-mccb2\" (UID: \"9bb5fd98-0b3a-4412-a083-80d87ee360f4\") " pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.848850 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.867972 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.885938 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.900141 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.914270 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.927294 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.934666 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944136 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-socket-dir-parent\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944183 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-hostroot\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944203 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-daemon-config\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944241 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9bb5fd98-0b3a-4412-a083-80d87ee360f4-hosts-file\") pod \"node-resolver-mccb2\" (UID: \"9bb5fd98-0b3a-4412-a083-80d87ee360f4\") " pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944262 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-system-cni-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944277 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cnibin\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944293 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-cnibin\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944307 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944323 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-k8s-cni-cncf-io\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944336 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-conf-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944352 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-os-release\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944368 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-etc-kubernetes\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944384 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnz9w\" (UniqueName: \"kubernetes.io/projected/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-kube-api-access-fnz9w\") pod 
\"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944403 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-system-cni-dir\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944417 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbl92\" (UniqueName: \"kubernetes.io/projected/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-kube-api-access-hbl92\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944431 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-netns\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944452 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-kubelet\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944466 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-multus-certs\") pod \"multus-77bsn\" (UID: 
\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944482 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8z7z\" (UniqueName: \"kubernetes.io/projected/e8e25829-99af-4717-87f3-43a79b9d8c26-kube-api-access-g8z7z\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944497 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cni-binary-copy\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944514 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8e25829-99af-4717-87f3-43a79b9d8c26-rootfs\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944531 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8e25829-99af-4717-87f3-43a79b9d8c26-proxy-tls\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944547 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-cni-dir\") pod \"multus-77bsn\" (UID: 
\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944568 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-cni-bin\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944584 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx556\" (UniqueName: \"kubernetes.io/projected/9bb5fd98-0b3a-4412-a083-80d87ee360f4-kube-api-access-gx556\") pod \"node-resolver-mccb2\" (UID: \"9bb5fd98-0b3a-4412-a083-80d87ee360f4\") " pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944615 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-cni-binary-copy\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944630 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944646 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-cni-multus\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " 
pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944661 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-os-release\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.944676 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8e25829-99af-4717-87f3-43a79b9d8c26-mcd-auth-proxy-config\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945261 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8e25829-99af-4717-87f3-43a79b9d8c26-mcd-auth-proxy-config\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945225 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945455 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-netns\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945488 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-kubelet\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945516 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-multus-certs\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945666 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-socket-dir-parent\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945714 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-hostroot\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.945928 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-run-k8s-cni-cncf-io\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946103 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cni-binary-copy\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946147 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8e25829-99af-4717-87f3-43a79b9d8c26-rootfs\") pod \"machine-config-daemon-hsg8r\" (UID: 
\"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946276 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9bb5fd98-0b3a-4412-a083-80d87ee360f4-hosts-file\") pod \"node-resolver-mccb2\" (UID: \"9bb5fd98-0b3a-4412-a083-80d87ee360f4\") " pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946381 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-daemon-config\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946419 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-system-cni-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946438 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-conf-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946585 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-cnibin\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946635 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cnibin\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946596 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-os-release\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946667 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-etc-kubernetes\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946702 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-system-cni-dir\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946727 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-cni-multus\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946808 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-os-release\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946842 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-host-var-lib-cni-bin\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.946984 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-multus-cni-dir\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.947173 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.947214 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-cni-binary-copy\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.959692 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.960342 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8e25829-99af-4717-87f3-43a79b9d8c26-proxy-tls\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.980598 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnz9w\" (UniqueName: \"kubernetes.io/projected/9357b7a7-d902-4f7e-97b9-b0a7871ec95e-kube-api-access-fnz9w\") pod \"multus-77bsn\" (UID: \"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\") " pod="openshift-multus/multus-77bsn" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.983477 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx556\" (UniqueName: \"kubernetes.io/projected/9bb5fd98-0b3a-4412-a083-80d87ee360f4-kube-api-access-gx556\") pod \"node-resolver-mccb2\" (UID: \"9bb5fd98-0b3a-4412-a083-80d87ee360f4\") " pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.983936 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8z7z\" (UniqueName: \"kubernetes.io/projected/e8e25829-99af-4717-87f3-43a79b9d8c26-kube-api-access-g8z7z\") pod \"machine-config-daemon-hsg8r\" (UID: \"e8e25829-99af-4717-87f3-43a79b9d8c26\") " pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.985139 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbl92\" (UniqueName: \"kubernetes.io/projected/6bea4f83-78aa-49a7-a98a-60045d7f4f0f-kube-api-access-hbl92\") pod 
\"multus-additional-cni-plugins-nndw6\" (UID: \"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\") " pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:46 crc kubenswrapper[4704]: I0122 16:28:46.990558 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:46Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.001757 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.022593 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.045523 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mccb2" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.052245 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.054333 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nndw6" Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.063987 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bb5fd98_0b3a_4412_a083_80d87ee360f4.slice/crio-f59579b5e4c26d5de8c86ee94074fa7d50d37d97018280dbd0a48c507dea554c WatchSource:0}: Error finding container f59579b5e4c26d5de8c86ee94074fa7d50d37d97018280dbd0a48c507dea554c: Status 404 returned error can't find the container with id f59579b5e4c26d5de8c86ee94074fa7d50d37d97018280dbd0a48c507dea554c Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.075287 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-77bsn" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.081874 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.082397 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.100817 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.116782 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8e25829_99af_4717_87f3_43a79b9d8c26.slice/crio-2af4d8ebb036f6d346b4836e9736aba83063a2e9ebb9bb9c4cc28ff627cc1dda WatchSource:0}: Error finding container 
2af4d8ebb036f6d346b4836e9736aba83063a2e9ebb9bb9c4cc28ff627cc1dda: Status 404 returned error can't find the container with id 2af4d8ebb036f6d346b4836e9736aba83063a2e9ebb9bb9c4cc28ff627cc1dda Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.117249 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.122947 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q8h4x"] Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.123836 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.128891 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.128996 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.129117 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.129159 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.129227 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.129248 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.129344 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.135643 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.151836 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.164388 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.181909 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.199349 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.216609 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.229432 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.241610 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.247829 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-netns\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.247866 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-script-lib\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.247894 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-etc-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.247910 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.247928 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-bin\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.247978 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-ovn-kubernetes\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248008 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-slash\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248037 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-var-lib-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248057 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-kubelet\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248097 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-netd\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248135 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-systemd-units\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248155 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-ovn\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248171 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-log-socket\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248190 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248211 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqnk\" (UniqueName: \"kubernetes.io/projected/fce29525-000a-4c91-8765-67c0c3f1ae7e-kube-api-access-hkqnk\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248245 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-node-log\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 
16:28:47.248264 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovn-node-metrics-cert\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248282 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-systemd\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248295 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-config\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.248311 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-env-overrides\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.261302 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.279288 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.303870 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349115 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349206 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-var-lib-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349225 4704 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-kubelet\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349241 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-netd\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349262 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-systemd-units\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349284 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-ovn\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349298 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-log-socket\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349314 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349334 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkqnk\" (UniqueName: \"kubernetes.io/projected/fce29525-000a-4c91-8765-67c0c3f1ae7e-kube-api-access-hkqnk\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349365 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-node-log\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349385 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovn-node-metrics-cert\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349401 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-systemd\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349417 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-config\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349431 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-env-overrides\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349448 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-netns\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349464 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-script-lib\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349479 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-etc-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349495 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349511 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-bin\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349536 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-ovn-kubernetes\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349551 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-slash\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349602 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-slash\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.349668 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-22 16:28:49.349653282 +0000 UTC m=+21.994199982 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349690 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-var-lib-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349710 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-kubelet\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349731 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-netd\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349749 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-systemd-units\") pod \"ovnkube-node-q8h4x\" (UID: 
\"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349769 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-ovn\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349808 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-log-socket\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.349829 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.350097 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-node-log\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351019 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-script-lib\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" 
Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351159 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-systemd\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351350 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351401 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-etc-openvswitch\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351553 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-ovn-kubernetes\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351582 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-bin\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351655 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-netns\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.351856 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-config\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.355249 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-env-overrides\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.356504 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovn-node-metrics-cert\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.369487 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkqnk\" (UniqueName: \"kubernetes.io/projected/fce29525-000a-4c91-8765-67c0c3f1ae7e-kube-api-access-hkqnk\") pod \"ovnkube-node-q8h4x\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.381191 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.427463 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd7
99a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.440558 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.443243 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.450066 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450336 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450419 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:49.450386552 +0000 UTC m=+22.094933252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.450338 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.450513 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.450546 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450643 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450711 4704 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450756 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450772 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450650 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450842 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450858 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450728 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:49.450719381 +0000 UTC m=+22.095266081 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450934 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:49.450915806 +0000 UTC m=+22.095462506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.450948 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:49.450941637 +0000 UTC m=+22.095488337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.462902 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfce29525_000a_4c91_8765_67c0c3f1ae7e.slice/crio-da90047f784a6ba3378431364b6575c6f9218b8f136c68799c145c134d49021d WatchSource:0}: Error finding container da90047f784a6ba3378431364b6575c6f9218b8f136c68799c145c134d49021d: Status 404 returned error can't find the container with id da90047f784a6ba3378431364b6575c6f9218b8f136c68799c145c134d49021d Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.469327 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.482928 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.491300 4704 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491500 4704 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491726 4704 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second 
and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491851 4704 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491883 4704 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491927 4704 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491950 4704 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491977 4704 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.491935 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-nndw6/status\": read tcp 38.129.56.249:53946->38.129.56.249:6443: use of closed network connection" Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.491999 4704 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492022 4704 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492081 4704 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492101 4704 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492136 4704 reflector.go:484] 
object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492152 4704 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492163 4704 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492186 4704 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492119 4704 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492327 4704 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: 
W0122 16:28:47.492350 4704 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492448 4704 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492452 4704 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492329 4704 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: W0122 16:28:47.492582 4704 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.598722 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:14:45.635860288 +0000 UTC Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.632958 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.633009 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.633078 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:47 crc kubenswrapper[4704]: E0122 16:28:47.633148 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.636702 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.637220 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.638479 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.639138 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.640215 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.640854 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.641568 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.642602 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.643248 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.644260 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.644775 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.646022 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.646538 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.647323 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.649333 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.649916 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.650934 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.651335 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.651986 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.654216 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.654923 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.655544 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.656193 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.656712 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.658193 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.659091 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.659883 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.661118 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.661579 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.662830 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.663361 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.665141 4704 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.665249 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.667076 4704 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.668275 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.668785 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.669902 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.670418 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.671141 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.672340 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.673054 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.674475 
4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.675061 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.676239 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.677182 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.678215 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.678743 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.680067 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.680938 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.682444 
4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.682577 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.683313 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.684426 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.685232 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.686609 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.687376 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.688005 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.700363 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.712211 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.729918 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.742925 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.758929 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mccb2" event={"ID":"9bb5fd98-0b3a-4412-a083-80d87ee360f4","Type":"ContainerStarted","Data":"e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.758990 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mccb2" event={"ID":"9bb5fd98-0b3a-4412-a083-80d87ee360f4","Type":"ContainerStarted","Data":"f59579b5e4c26d5de8c86ee94074fa7d50d37d97018280dbd0a48c507dea554c"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.759967 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.760945 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" 
event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerStarted","Data":"4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.760982 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerStarted","Data":"f36a230950fa1aea2c1e5225f31b34d16ec7f66dae12161e17786f775e44576f"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.762601 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ztlx4" event={"ID":"c93a4369-3f1a-4707-9e55-3968cfef2744","Type":"ContainerStarted","Data":"4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.765266 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62" exitCode=0 Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.765324 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.765342 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"da90047f784a6ba3378431364b6575c6f9218b8f136c68799c145c134d49021d"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.767913 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.767974 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.767998 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"2af4d8ebb036f6d346b4836e9736aba83063a2e9ebb9bb9c4cc28ff627cc1dda"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.770469 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerStarted","Data":"cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.770513 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerStarted","Data":"688a1d9182940b1a47682d1ad3d55d4e4d0ea1913a24c5fc5eeb2253f01a8594"} Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.783922 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.808129 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.827982 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.840667 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.857566 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.873076 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.887966 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.903895 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":f
alse,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":
\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.923766 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479f
e187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.935084 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.946554 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:47 crc kubenswrapper[4704]: I0122 16:28:47.968509 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.008829 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.022253 4704 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.038576 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.058885 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.081119 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.102359 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd7
99a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.131425 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.154257 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.378114 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.404858 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.409223 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 
16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.410625 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.435853 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.458259 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.471788 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.473926 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.537707 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.549460 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.549478 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.574124 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.577202 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.598867 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 
23:06:05.698856915 +0000 UTC Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.617211 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.620625 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.633040 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:48 crc kubenswrapper[4704]: E0122 16:28:48.633176 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.660683 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.680823 4704 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.683147 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.683182 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.683193 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.683300 4704 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.691005 4704 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.691433 4704 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.692427 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.692466 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.692479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.692495 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.692507 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.697018 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 16:28:48 crc kubenswrapper[4704]: E0122 16:28:48.712464 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f
227a5746\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.716292 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.716337 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.716349 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.716366 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.716377 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:48 crc kubenswrapper[4704]: E0122 16:28:48.729561 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.733243 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.733368 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.733484 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.733593 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.733696 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:48 crc kubenswrapper[4704]: E0122 16:28:48.747115 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.752042 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.752075 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.752088 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.752106 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.752122 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:48 crc kubenswrapper[4704]: E0122 16:28:48.767767 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.771662 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.771693 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.771706 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.771722 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.771734 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.774536 4704 generic.go:334] "Generic (PLEG): container finished" podID="6bea4f83-78aa-49a7-a98a-60045d7f4f0f" containerID="cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928" exitCode=0 Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.774603 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerDied","Data":"cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.780674 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.780715 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.780728 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.780739 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"} Jan 22 16:28:48 crc kubenswrapper[4704]: E0122 16:28:48.785458 4704 kubelet_node_status.go:585] 
"Error updating node status, will retry" err="failed to patch status \"{...}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: E0122 16:28:48.785634 4704 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.787191 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.787212 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.787221 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.787235 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.787246 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.791040 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd7
99a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.807559 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.821646 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.837372 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.861512 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.876878 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.891398 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.891530 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.891594 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.891658 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.891741 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.896787 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.905865 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.919116 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.938819 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.942506 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.958204 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.961886 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.973401 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.989825 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:48Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.993685 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.993714 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.993722 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.993735 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:48 crc kubenswrapper[4704]: I0122 16:28:48.993743 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:48Z","lastTransitionTime":"2026-01-22T16:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.010445 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.013611 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.037920 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.088962 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.095966 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.096006 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.096017 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.096034 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 
16:28:49.096046 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.198469 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.198750 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.198899 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.199111 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.199244 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.301367 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.301408 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.301418 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.301434 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.301446 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.368563 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.368757 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 16:28:53.368730416 +0000 UTC m=+26.013277106 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.403653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.403694 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.403705 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.403720 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.403730 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.470355 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.470397 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.470428 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.470446 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470526 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470585 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470602 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470612 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470541 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470643 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470687 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470613 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-22 16:28:53.470592458 +0000 UTC m=+26.115139168 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470701 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470722 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:53.470702871 +0000 UTC m=+26.115249581 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470741 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:53.470733042 +0000 UTC m=+26.115279762 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.470757 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:28:53.470748112 +0000 UTC m=+26.115294822 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.506045 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.506097 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.506108 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.506164 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.506185 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.599699 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:44:16.143981662 +0000 UTC Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.608541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.608603 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.608617 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.608634 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.608646 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.633312 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.633312 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.633463 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:49 crc kubenswrapper[4704]: E0122 16:28:49.633520 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.711567 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.711615 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.711625 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.711647 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.711660 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.786611 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.790999 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.791035 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.792549 4704 generic.go:334] "Generic (PLEG): container finished" podID="6bea4f83-78aa-49a7-a98a-60045d7f4f0f" containerID="1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3" exitCode=0 Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.792584 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerDied","Data":"1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.803750 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89
d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506c
e0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.813556 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.813597 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.813607 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.813623 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.813634 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.831369 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.847914 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.863614 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.878106 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.890735 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.900502 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.911487 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.915286 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.915337 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.915349 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.915369 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.915381 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:49Z","lastTransitionTime":"2026-01-22T16:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.924150 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.933860 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.945080 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.955734 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.967165 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.984211 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:49 crc kubenswrapper[4704]: I0122 16:28:49.999767 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:49Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.012574 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.017389 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.017462 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.017475 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.017492 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.017504 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.024666 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.033987 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.043356 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.056638 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.067221 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.078438 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.097425 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.109873 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd7
99a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.119975 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.120029 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.120048 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.120072 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.120091 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.125004 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.135867 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.148843 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.162709 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.223311 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc 
kubenswrapper[4704]: I0122 16:28:50.223356 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.223366 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.223381 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.223390 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.326459 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.326529 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.326542 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.326562 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.326577 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.429391 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.429496 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.429515 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.429538 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.429555 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.532395 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.532457 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.532475 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.532503 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.532520 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.599979 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:45:01.551974534 +0000 UTC Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.613566 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.620883 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.632774 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.632825 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd79
9a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: E0122 16:28:50.632973 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.635665 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.635751 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.635813 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.635847 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.635873 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.646583 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.663544 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.684609 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.701921 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.713205 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.723850 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.736846 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.738888 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.738944 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.738962 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.738985 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.739003 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.752146 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.765001 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.778834 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.800372 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.819601 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.841363 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.841421 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.841436 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.841458 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.841474 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.852641 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.865542 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.881262 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.893110 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.912442 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.923853 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.944347 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.944764 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.944776 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.944806 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.944820 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:50Z","lastTransitionTime":"2026-01-22T16:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.947780 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:50 crc kubenswrapper[4704]: I0122 16:28:50.992785 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.029719 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.047202 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.047235 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.047244 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.047258 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.047267 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.069663 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.106544 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479f
e187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.148336 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.149709 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.149876 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.149986 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.150138 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.150243 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.189465 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.227898 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d07741311
0c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.253516 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.253560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.253569 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 
16:28:51.253583 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.253593 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.268610 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.356575 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.356619 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.356634 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.356656 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.356672 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.459945 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.459994 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.460005 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.460021 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.460033 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.562751 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.562823 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.562836 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.562852 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.562865 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.600461 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:35:04.539380825 +0000 UTC Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.633055 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:51 crc kubenswrapper[4704]: E0122 16:28:51.633222 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.633707 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:51 crc kubenswrapper[4704]: E0122 16:28:51.633950 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.665097 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.665145 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.665154 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.665170 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.665179 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.768733 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.768784 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.768827 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.768849 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.768862 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.806141 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.809297 4704 generic.go:334] "Generic (PLEG): container finished" podID="6bea4f83-78aa-49a7-a98a-60045d7f4f0f" containerID="cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f" exitCode=0 Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.809362 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerDied","Data":"cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.829570 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.851815 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412b
ed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.866222 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.871419 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.871457 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.871468 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.871485 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.871494 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.878779 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.891132 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.901608 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc
2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.911350 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.922630 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace
52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.938814 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.950782 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.961745 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.973868 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.974166 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.974223 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.974235 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 
16:28:51.974252 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.974263 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:51Z","lastTransitionTime":"2026-01-22T16:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.986297 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:51 crc kubenswrapper[4704]: I0122 16:28:51.998512 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.076909 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.076962 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.076974 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.076992 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.077004 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.180235 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.180286 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.180300 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.180318 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.180331 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.283560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.283610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.283622 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.283640 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.283652 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.386405 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.386454 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.386467 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.386485 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.386498 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.489060 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.489099 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.489106 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.489119 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.489127 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.591084 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.591139 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.591163 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.591183 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.591202 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.601494 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:46:43.442753995 +0000 UTC Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.633448 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:52 crc kubenswrapper[4704]: E0122 16:28:52.633604 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.693898 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.693946 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.693961 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.693982 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.693996 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.796881 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.796922 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.796933 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.796948 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.796960 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.817139 4704 generic.go:334] "Generic (PLEG): container finished" podID="6bea4f83-78aa-49a7-a98a-60045d7f4f0f" containerID="1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e" exitCode=0 Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.817233 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerDied","Data":"1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.841861 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.863372 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412b
ed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.877265 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.891126 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.899641 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.899960 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.900057 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.900154 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.900259 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:52Z","lastTransitionTime":"2026-01-22T16:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.904729 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.919585 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.930430 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.940291 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.950915 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.967601 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:52 crc kubenswrapper[4704]: I0122 16:28:52.993275 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.002946 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.002986 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.002997 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.003013 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.003024 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.006785 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a
89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:53Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.020134 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:53Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.034104 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:53Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.105299 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.105339 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.105348 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.105364 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.105376 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.208567 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.208602 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.208613 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.208680 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.208694 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.311471 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.311528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.311539 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.311556 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.311567 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.408641 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.408904 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 16:29:01.408874311 +0000 UTC m=+34.053421011 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.414349 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.414380 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.414398 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.414411 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.414421 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.509847 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.509965 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.509986 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510020 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510034 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.510031 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510087 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:01.510071385 +0000 UTC m=+34.154618085 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.510115 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510164 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510192 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510214 4704 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:01.510208138 +0000 UTC m=+34.154754838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510252 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:01.510225059 +0000 UTC m=+34.154771799 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510391 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510418 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510441 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.510499 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:01.510480136 +0000 UTC m=+34.155026886 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.516999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.517048 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.517059 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.517076 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.517087 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.602083 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:01:07.391722901 +0000 UTC Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.619163 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.619207 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.619216 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.619232 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.619243 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.633732 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.633871 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.634012 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:53 crc kubenswrapper[4704]: E0122 16:28:53.634188 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.721872 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.721917 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.721926 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.721941 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.721952 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.823599 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.823649 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.823661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.823679 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.823691 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.824176 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerStarted","Data":"38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913"} Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.927673 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.928085 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.928100 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.928118 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:53 crc kubenswrapper[4704]: I0122 16:28:53.928129 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:53Z","lastTransitionTime":"2026-01-22T16:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.031630 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.031668 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.031678 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.031692 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.031701 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.134729 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.134845 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.134860 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.134883 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.134912 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.238730 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.238775 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.238787 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.238824 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.238834 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.341427 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.341467 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.341478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.341496 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.341510 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.444434 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.444504 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.444528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.444557 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.444580 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.547787 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.547894 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.547916 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.547947 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.547968 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.602660 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:02:35.982879284 +0000 UTC Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.633206 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:54 crc kubenswrapper[4704]: E0122 16:28:54.633347 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.651365 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.651527 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.651541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.651560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.651574 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.754847 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.754885 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.754896 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.754914 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.754928 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.834991 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.835341 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.835469 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.835501 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.842032 4704 generic.go:334] "Generic (PLEG): container finished" podID="6bea4f83-78aa-49a7-a98a-60045d7f4f0f" containerID="38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913" exitCode=0 Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.842114 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerDied","Data":"38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.851402 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.857876 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.857910 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.857917 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.857937 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.857946 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.871248 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.871504 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.872240 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.888378 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.918269 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.934380 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.956529 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.962358 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.962439 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.962466 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.962498 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.962531 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:54Z","lastTransitionTime":"2026-01-22T16:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:54 crc kubenswrapper[4704]: I0122 16:28:54.981311 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.001306 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:54Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.018253 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412b
ed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.033411 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.046129 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.056389 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.066691 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.066720 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.066728 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.066742 
4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.066751 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.070788 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.083473 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.096580 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.115585 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.125917 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.138008 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.150811 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc
2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.172086 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.180589 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.180617 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.180626 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.180641 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.180651 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.193450 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.226240 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.242704 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.261248 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.274179 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.283748 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.283820 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.283833 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.283849 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.283864 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.287809 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.300084 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.312219 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.387085 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.387135 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.387151 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.387167 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.387179 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.490727 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.490779 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.490813 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.490836 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.490852 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.594367 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.594427 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.594438 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.594459 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.594472 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.603286 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:09:33.813280659 +0000 UTC Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.633746 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.633852 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:55 crc kubenswrapper[4704]: E0122 16:28:55.634024 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:55 crc kubenswrapper[4704]: E0122 16:28:55.634135 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.697724 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.697786 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.697816 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.697835 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.697847 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.800061 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.800104 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.800115 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.800131 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.800143 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.848065 4704 generic.go:334] "Generic (PLEG): container finished" podID="6bea4f83-78aa-49a7-a98a-60045d7f4f0f" containerID="899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0" exitCode=0 Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.848156 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerDied","Data":"899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.864282 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.882614 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.896139 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.904254 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.904292 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.904304 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.904321 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.904333 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:55Z","lastTransitionTime":"2026-01-22T16:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.917254 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.931370 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.945548 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.961228 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.975247 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:55 crc kubenswrapper[4704]: I0122 16:28:55.990163 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:55Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.004148 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.008718 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.008751 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.008765 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.008782 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.008809 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.016733 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.027747 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d07741311
0c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.045769 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.062371 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc
2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.111830 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.111867 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.111878 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.111894 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.111904 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.214479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.214521 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.214533 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.214551 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.214563 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.316780 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.316828 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.316837 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.316854 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.316865 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.418833 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.418867 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.418876 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.418889 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.418899 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.521486 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.521528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.521569 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.521586 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.521597 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.603442 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:36:25.009908302 +0000 UTC Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.624457 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.624509 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.624522 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.624541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.624564 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.632692 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:56 crc kubenswrapper[4704]: E0122 16:28:56.632840 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.726432 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.726468 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.726477 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.726490 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.726498 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.828828 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.828863 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.828871 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.828885 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.828896 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.853615 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" event={"ID":"6bea4f83-78aa-49a7-a98a-60045d7f4f0f","Type":"ContainerStarted","Data":"1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.867945 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.882281 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.894888 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.921219 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.930730 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.930774 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.930785 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.930823 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.930834 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:56Z","lastTransitionTime":"2026-01-22T16:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.939334 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a
89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.954261 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.967561 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.982564 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:56 crc kubenswrapper[4704]: I0122 16:28:56.997449 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:56Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.008816 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.017898 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.027036 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.033128 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.033154 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.033162 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.033176 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.033185 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.037899 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.050114 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.138736 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.139011 4704 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.139121 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.139209 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.139279 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.242470 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.242542 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.242558 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.242575 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.242588 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.344773 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.344847 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.344857 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.344871 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.344881 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.447365 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.447410 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.447422 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.447439 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.447452 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.549841 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.549878 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.549890 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.549906 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.549918 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.604148 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:56:34.294776556 +0000 UTC Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.633505 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.633556 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:57 crc kubenswrapper[4704]: E0122 16:28:57.633648 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:57 crc kubenswrapper[4704]: E0122 16:28:57.633755 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.648631 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.652232 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.652280 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.652292 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.652312 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.652330 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.661462 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.673565 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d07741311
0c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.700719 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.723750 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc
2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.738730 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.751010 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.754352 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.754382 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.754392 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.754408 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.754418 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.783125 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.795094 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.809970 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.822033 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.835041 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.849149 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.856568 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.856629 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.856644 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.856665 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.856681 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.858065 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/0.log" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.861296 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114" exitCode=1 Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.861450 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.862741 4704 scope.go:117] "RemoveContainer" containerID="f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.868285 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.888557 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 
5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff5
8ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.902036 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.914531 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.927779 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.940702 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.953711 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.958951 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.959224 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.959302 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.959379 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.959482 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:57Z","lastTransitionTime":"2026-01-22T16:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.967494 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.983161 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:57 crc kubenswrapper[4704]: I0122 16:28:57.996763 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:57Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.007813 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.022343 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.034363 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc
2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.045822 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.055513 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.062282 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.062603 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.062719 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.062832 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.062929 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.165363 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.165586 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.165689 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.165771 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.165872 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.268368 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.268407 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.268416 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.268429 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.268439 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.332923 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.345696 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.364538 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.370536 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.370571 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.370582 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.370597 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.370608 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.380763 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.393846 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.412015 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.428813 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.441135 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.456959 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.472666 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.472773 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.472858 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.472877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.472905 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.472923 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.492321 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.508242 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.529027 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 
5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff5
8ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.543669 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.556265 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.575283 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.575332 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.575348 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.575371 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.575386 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.604847 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:14:54.152340301 +0000 UTC Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.633187 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:28:58 crc kubenswrapper[4704]: E0122 16:28:58.633307 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.677248 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.677295 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.677306 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.677323 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.677335 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.779698 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.779737 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.779747 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.779761 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.779770 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.866423 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/0.log" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.869123 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.869508 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.882594 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.882635 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.882648 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.882672 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.882686 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.883886 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.896949 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.913619 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.927133 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.940367 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.960185 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.974363 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.985290 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 
16:28:58.985347 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.985363 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.985385 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.985401 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:58Z","lastTransitionTime":"2026-01-22T16:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:58 crc kubenswrapper[4704]: I0122 16:28:58.988418 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.000763 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:58Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.011372 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.025928 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.033264 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.033301 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.033310 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.033325 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.033335 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.041741 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.051084 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.134189 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.134225 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.134237 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.134251 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.134261 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.138071 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.146166 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.147573 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc"] Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.147978 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.148917 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.148943 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.148951 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.148964 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.148973 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.151211 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.151215 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.154372 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert
-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.162209 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.165036 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.165079 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.165087 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.165101 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.165109 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.170594 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.178200 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.182141 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.182186 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.182199 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.182216 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.182229 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.188709 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.196317 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.196467 4704 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.197974 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.198002 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.198013 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.198027 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.198037 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.205705 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.233097 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.237229 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.237535 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6trt\" (UniqueName: \"kubernetes.io/projected/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-kube-api-access-d6trt\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: 
\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.237678 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.237811 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.250080 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.261106 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.271266 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.283349 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.298263 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.299864 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.299983 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.300052 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.300119 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.300175 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.313278 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.324415 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.334492 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.338726 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.338890 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.338999 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6trt\" (UniqueName: \"kubernetes.io/projected/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-kube-api-access-d6trt\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.339133 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.339365 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.339501 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.345104 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.347120 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.354993 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6trt\" (UniqueName: \"kubernetes.io/projected/0e1c055c-2596-4053-b9d1-fcc44c50e6e3-kube-api-access-d6trt\") pod \"ovnkube-control-plane-749d76644c-s2xkc\" (UID: \"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.358708 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.370275 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:28:59Z is after 2025-08-24T17:21:41Z" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.402391 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.402436 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.402449 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.402471 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.402483 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.459861 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" Jan 22 16:28:59 crc kubenswrapper[4704]: W0122 16:28:59.472330 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e1c055c_2596_4053_b9d1_fcc44c50e6e3.slice/crio-5c27dcf29028a477ad6aa38af009a1df78d0976bfc895b653bbe3c0dbae2b2da WatchSource:0}: Error finding container 5c27dcf29028a477ad6aa38af009a1df78d0976bfc895b653bbe3c0dbae2b2da: Status 404 returned error can't find the container with id 5c27dcf29028a477ad6aa38af009a1df78d0976bfc895b653bbe3c0dbae2b2da Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.509415 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.509458 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.509474 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.509546 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.509572 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.605658 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:28:12.514560994 +0000 UTC Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.612897 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.612932 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.612944 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.612961 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.612972 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.634180 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.634198 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.634309 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:28:59 crc kubenswrapper[4704]: E0122 16:28:59.634356 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.725234 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.725314 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.725338 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.725369 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.725391 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.828407 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.828478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.828498 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.828524 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.828544 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.876024 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" event={"ID":"0e1c055c-2596-4053-b9d1-fcc44c50e6e3","Type":"ContainerStarted","Data":"5c27dcf29028a477ad6aa38af009a1df78d0976bfc895b653bbe3c0dbae2b2da"} Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.931433 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.931485 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.931502 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.931528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:28:59 crc kubenswrapper[4704]: I0122 16:28:59.931549 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:28:59Z","lastTransitionTime":"2026-01-22T16:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.034338 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.034403 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.034420 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.034444 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.034462 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.137161 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.137232 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.137247 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.137265 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.137317 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.245095 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.245166 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.245192 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.245225 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.245258 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.349059 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.349097 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.349108 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.349124 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.349136 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.454329 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.454385 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.454404 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.454428 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.454445 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.557117 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.557158 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.557170 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.557186 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.557196 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.606566 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:49:50.422292654 +0000 UTC Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.633075 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:00 crc kubenswrapper[4704]: E0122 16:29:00.633440 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.676282 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.676325 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.676339 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.676358 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.676372 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.778701 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.778739 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.778747 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.778762 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.778771 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.881289 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.881407 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.881423 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.881440 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.881452 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.984160 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.984230 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.984255 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.984286 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:00 crc kubenswrapper[4704]: I0122 16:29:00.984306 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:00Z","lastTransitionTime":"2026-01-22T16:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.052853 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-92rrv"] Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.053622 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.053733 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.070927 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.082842 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.086302 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.086341 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.086354 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.086372 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.086384 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.094059 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.104219 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.124780 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.144669 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.157716 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.158079 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjn8\" (UniqueName: \"kubernetes.io/projected/022e2512-8e2d-483f-a733-8681aad464a3-kube-api-access-ftjn8\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.168127 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.184681 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.188230 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.188388 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.188482 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.188571 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.188656 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.200232 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.221088 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped 
ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.234471 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc 
kubenswrapper[4704]: I0122 16:29:01.247405 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.259443 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.259684 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftjn8\" (UniqueName: \"kubernetes.io/projected/022e2512-8e2d-483f-a733-8681aad464a3-kube-api-access-ftjn8\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.259594 4704 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.259975 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:01.759955508 +0000 UTC m=+34.404502208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.259455 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.271651 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.275082 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftjn8\" (UniqueName: \"kubernetes.io/projected/022e2512-8e2d-483f-a733-8681aad464a3-kube-api-access-ftjn8\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.285536 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.290767 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.290828 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.290840 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.290857 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.290869 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.300619 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.392860 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.393134 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.393232 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.393328 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.393408 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.461755 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.461966 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:29:17.461930523 +0000 UTC m=+50.106477263 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.495486 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.495534 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.495545 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.495562 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: 
I0122 16:29:01.495576 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.563826 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.563882 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.563914 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.563959 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564098 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564152 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:17.564133544 +0000 UTC m=+50.208680244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564147 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564222 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564247 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564273 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564329 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564338 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:17.564309449 +0000 UTC m=+50.208856199 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564350 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564174 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564418 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:17.564400732 +0000 UTC m=+50.208947492 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.564454 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-22 16:29:17.564434183 +0000 UTC m=+50.208980973 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.597855 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.597998 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.598030 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.598053 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.598071 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.607534 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:35:08.652912067 +0000 UTC Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.632951 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.633007 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.633102 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.633337 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.700952 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.700985 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.700994 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.701007 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.701016 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.769967 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.770254 4704 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.770365 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:02.770334415 +0000 UTC m=+35.414881145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.803883 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.804085 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.804316 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.804529 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.805151 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.900997 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/1.log" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.902067 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/0.log" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.906370 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8" exitCode=1 Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.906446 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.906522 4704 scope.go:117] "RemoveContainer" containerID="f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.908031 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.908073 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.908089 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.908109 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.908124 
4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:01Z","lastTransitionTime":"2026-01-22T16:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.908331 4704 scope.go:117] "RemoveContainer" containerID="e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8" Jan 22 16:29:01 crc kubenswrapper[4704]: E0122 16:29:01.908893 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.909111 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" event={"ID":"0e1c055c-2596-4053-b9d1-fcc44c50e6e3","Type":"ContainerStarted","Data":"fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133"} Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.927385 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.948426 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.965199 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.977555 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.987224 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:01 crc kubenswrapper[4704]: I0122 16:29:01.999631 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:01Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.010037 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.010076 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.010095 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.010112 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.010123 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.012208 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.029241 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.048140 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.060514 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.072440 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.087493 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.097919 4704 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc 
kubenswrapper[4704]: I0122 16:29:02.109491 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.111877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.111913 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.111921 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.111936 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.111945 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.121848 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.138494 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.214273 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.214567 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.214642 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 
16:29:02.214707 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.214780 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.317731 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.317775 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.317784 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.317821 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.317832 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.421235 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.421273 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.421290 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.421317 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.421332 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.528023 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.528101 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.528122 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.528152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.528174 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.609009 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:39:32.807757882 +0000 UTC Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.630822 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.631175 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.631183 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.631196 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.631205 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.633089 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:02 crc kubenswrapper[4704]: E0122 16:29:02.633176 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.633092 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:02 crc kubenswrapper[4704]: E0122 16:29:02.633296 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.733875 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.734110 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.734180 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.734272 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.734356 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.779768 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:02 crc kubenswrapper[4704]: E0122 16:29:02.779885 4704 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:02 crc kubenswrapper[4704]: E0122 16:29:02.780169 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:04.78014966 +0000 UTC m=+37.424696350 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.836957 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.836988 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.836999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.837014 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.837025 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.917612 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/1.log" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.922397 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" event={"ID":"0e1c055c-2596-4053-b9d1-fcc44c50e6e3","Type":"ContainerStarted","Data":"6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.939440 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.939486 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.939499 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.939518 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.939531 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:02Z","lastTransitionTime":"2026-01-22T16:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.942753 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.956030 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.970454 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.984420 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:02 crc kubenswrapper[4704]: I0122 16:29:02.999417 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:02Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.010372 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.019629 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.029962 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.041429 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.041463 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.041491 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.041505 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.041515 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.042983 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.053492 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.063709 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e
4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.075947 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.088496 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.113372 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.124452 4704 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc 
kubenswrapper[4704]: I0122 16:29:03.138022 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:03Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.143781 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.143842 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.143854 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.143869 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.143881 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.247292 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.247377 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.247402 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.247432 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.247454 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.350009 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.350064 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.350080 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.350099 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.350116 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.452554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.452610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.452627 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.452647 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.452658 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.555850 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.555918 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.555938 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.555968 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.555986 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.609485 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:52:14.715344688 +0000 UTC Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.633281 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.633367 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:03 crc kubenswrapper[4704]: E0122 16:29:03.633443 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:03 crc kubenswrapper[4704]: E0122 16:29:03.633535 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.658198 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.658240 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.658264 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.658286 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.658299 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.761201 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.761255 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.761272 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.761299 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.761321 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.864506 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.864709 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.864745 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.864864 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.864922 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.973053 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.973101 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.973112 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.973127 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:03 crc kubenswrapper[4704]: I0122 16:29:03.973138 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:03Z","lastTransitionTime":"2026-01-22T16:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.075922 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.075981 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.076005 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.076030 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.076052 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.178850 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.178899 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.178910 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.178928 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.178938 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.280892 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.280936 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.280948 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.280964 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.280974 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.383321 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.383369 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.383386 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.383408 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.383425 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.486141 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.486178 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.486188 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.486204 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.486214 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.588457 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.588494 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.588502 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.588531 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.588541 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.610252 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:56:53.722063608 +0000 UTC Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.632750 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.632867 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:04 crc kubenswrapper[4704]: E0122 16:29:04.632953 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:04 crc kubenswrapper[4704]: E0122 16:29:04.633027 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.690916 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.690977 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.690993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.691057 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.691072 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.793862 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.793910 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.793920 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.793936 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.793947 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.803441 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:04 crc kubenswrapper[4704]: E0122 16:29:04.803584 4704 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:04 crc kubenswrapper[4704]: E0122 16:29:04.803654 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:08.803635906 +0000 UTC m=+41.448182616 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.896684 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.896730 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.896741 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.896759 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.896770 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.999435 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.999479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.999491 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.999508 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:04 crc kubenswrapper[4704]: I0122 16:29:04.999520 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:04Z","lastTransitionTime":"2026-01-22T16:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.110696 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.110760 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.110774 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.110812 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.110829 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.212827 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.212880 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.212889 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.212901 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.212910 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.314816 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.314844 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.314852 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.314866 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.314875 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.417076 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.417422 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.417509 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.417602 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.417671 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.520531 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.520850 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.520937 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.521035 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.521134 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.611214 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:14:56.209398109 +0000 UTC Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.623460 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.623542 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.623564 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.623594 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.623612 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.633741 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.633771 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:05 crc kubenswrapper[4704]: E0122 16:29:05.633935 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:05 crc kubenswrapper[4704]: E0122 16:29:05.633982 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.725476 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.725518 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.725530 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.725545 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.725563 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.828416 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.828463 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.828479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.828499 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.828514 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.930361 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.930404 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.930421 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.930440 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:05 crc kubenswrapper[4704]: I0122 16:29:05.930455 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:05Z","lastTransitionTime":"2026-01-22T16:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.033030 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.033378 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.033482 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.033577 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.033661 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.136674 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.137166 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.137356 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.137532 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.137689 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.241121 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.241596 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.241684 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.241773 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.241893 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.344590 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.344637 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.344647 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.344663 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.344675 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.446855 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.446892 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.446903 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.446918 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.446929 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.548884 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.548925 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.548934 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.548949 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.548959 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.611863 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:55:09.0659339 +0000 UTC Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.633436 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.633461 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:06 crc kubenswrapper[4704]: E0122 16:29:06.633582 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:06 crc kubenswrapper[4704]: E0122 16:29:06.633717 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.651547 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.651843 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.652025 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.652212 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.652356 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.755497 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.755541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.755554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.755570 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.755581 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.858213 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.858253 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.858264 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.858280 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.858292 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.961357 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.961392 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.961401 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.961417 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:06 crc kubenswrapper[4704]: I0122 16:29:06.961427 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:06Z","lastTransitionTime":"2026-01-22T16:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.064056 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.064094 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.064102 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.064114 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.064123 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.166757 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.166816 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.166826 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.166841 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.166851 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.269337 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.269392 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.269402 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.269439 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.269449 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.371655 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.372408 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.372483 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.372554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.372616 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.474644 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.474675 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.474685 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.474701 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.474713 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.577181 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.577234 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.577244 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.577260 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.577271 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.612917 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:16:17.218174368 +0000 UTC Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.634057 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.634130 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:07 crc kubenswrapper[4704]: E0122 16:29:07.634297 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:07 crc kubenswrapper[4704]: E0122 16:29:07.634490 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.659484 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.679737 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.679779 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.679817 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.679837 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.679849 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.682488 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.697246 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.718131 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.738404 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.750154 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.765591 4704 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.780997 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.782478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.782570 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.782628 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.782689 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.782782 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.808081 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42132db8262b4e19f9f73e25b328d5b09016912733df64c5c38728293fff114\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"message\\\":\\\"oval\\\\nI0122 16:28:56.880169 5915 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 16:28:56.880219 5915 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 16:28:56.880224 5915 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 16:28:56.880231 5915 handler.go:190] Sending *v1.Pod 
event handler 6 for removal\\\\nI0122 16:28:56.880259 5915 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0122 16:28:56.880261 5915 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:56.880269 5915 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 16:28:56.880283 5915 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 16:28:56.880293 5915 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 16:28:56.880300 5915 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 16:28:56.880309 5915 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 16:28:56.880312 5915 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0122 16:28:56.880322 5915 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 16:28:56.880372 5915 factory.go:656] Stopping watch factory\\\\nI0122 16:28:56.880390 5915 ovnkube.go:599] Stopped ovnkube\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.820714 4704 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc 
kubenswrapper[4704]: I0122 16:29:07.832076 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.843765 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.858070 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.871625 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.885758 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.885812 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.885823 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.885838 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.885849 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.886124 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.901483 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:07Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.988988 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.989040 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.989051 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.989067 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:07 crc kubenswrapper[4704]: I0122 16:29:07.989079 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:07Z","lastTransitionTime":"2026-01-22T16:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.091610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.091650 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.091660 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.091676 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.091688 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.194486 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.194538 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.194554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.194576 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.194592 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.296855 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.297110 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.297190 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.297276 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.297353 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.400042 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.400118 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.400129 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.400143 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.400153 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.502661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.502702 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.502714 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.502728 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.502739 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.604993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.605036 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.605047 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.605065 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.605077 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.613512 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:27:24.521485982 +0000 UTC Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.632846 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.632897 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:08 crc kubenswrapper[4704]: E0122 16:29:08.633259 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:08 crc kubenswrapper[4704]: E0122 16:29:08.633108 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.708053 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.708102 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.708114 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.708134 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.708151 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.810842 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.810876 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.810888 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.810903 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.810916 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.852154 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:08 crc kubenswrapper[4704]: E0122 16:29:08.852326 4704 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:08 crc kubenswrapper[4704]: E0122 16:29:08.852660 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:16.852634714 +0000 UTC m=+49.497181434 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.913635 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.913683 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.913697 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.913715 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:08 crc kubenswrapper[4704]: I0122 16:29:08.913728 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:08Z","lastTransitionTime":"2026-01-22T16:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.016338 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.016379 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.016391 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.016406 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.016416 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.119173 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.119257 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.119280 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.119320 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.119344 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.221500 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.221549 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.221563 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.221580 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.221591 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.324034 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.324084 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.324095 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.324132 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.324143 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.425953 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.426002 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.426013 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.426030 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.426041 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.527639 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.527667 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.527676 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.527689 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.527698 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.594239 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.594327 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.594353 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.594384 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.594408 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.611617 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.613971 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:35:18.724734182 +0000 UTC Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.615438 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.615479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.615493 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.615511 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.615523 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.627259 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.631966 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.632032 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.632048 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.632073 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.632087 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.633277 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.633296 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.633473 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.633671 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.645493 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.649304 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.649354 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.649367 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.649386 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.649402 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.662262 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.666499 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.666561 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.666575 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.666595 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.666608 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.680959 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:09Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:09 crc kubenswrapper[4704]: E0122 16:29:09.681078 4704 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.682662 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.682691 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.682704 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.682722 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.682735 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.785730 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.785773 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.785785 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.785825 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.785841 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.888506 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.888881 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.888980 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.889060 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.889141 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.992234 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.992284 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.992335 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.992357 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:09 crc kubenswrapper[4704]: I0122 16:29:09.992369 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:09Z","lastTransitionTime":"2026-01-22T16:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.095850 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.095891 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.095901 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.095922 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.095934 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.199153 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.199195 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.199208 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.199226 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.199239 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.302453 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.302526 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.302542 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.302568 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.302586 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.405757 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.405875 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.405902 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.405935 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.405958 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.514559 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.514614 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.514630 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.514649 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.514665 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.614256 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:41:12.059277446 +0000 UTC Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.617289 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.617467 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.617509 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.617566 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.617591 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.633067 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.633188 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:10 crc kubenswrapper[4704]: E0122 16:29:10.633285 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:10 crc kubenswrapper[4704]: E0122 16:29:10.633493 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.720567 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.720619 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.720634 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.720655 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.720670 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.823439 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.823517 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.823531 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.823547 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.823559 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.926244 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.926278 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.926287 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.926301 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:10 crc kubenswrapper[4704]: I0122 16:29:10.926311 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:10Z","lastTransitionTime":"2026-01-22T16:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.029278 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.029556 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.029687 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.029773 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.029873 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.132356 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.132704 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.132823 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.132959 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.133145 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.236464 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.236903 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.237144 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.237303 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.237490 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.339956 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.339993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.340002 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.340019 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.340030 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.442915 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.442958 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.442969 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.442983 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.442995 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.545617 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.545667 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.545683 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.545706 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.545727 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.614989 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:45:54.81483958 +0000 UTC Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.633453 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:11 crc kubenswrapper[4704]: E0122 16:29:11.633585 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.633453 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:11 crc kubenswrapper[4704]: E0122 16:29:11.633866 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.647970 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.647999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.648010 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.648025 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.648039 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.751096 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.751128 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.751137 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.751158 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.751171 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.852768 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.852812 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.852821 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.852832 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.852841 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.954555 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.954591 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.954602 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.954623 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:11 crc kubenswrapper[4704]: I0122 16:29:11.954636 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:11Z","lastTransitionTime":"2026-01-22T16:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.057694 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.057745 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.057761 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.057782 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.057822 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.160432 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.160495 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.160561 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.160591 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.160613 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.264595 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.264656 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.264666 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.264687 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.264700 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.367607 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.367639 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.367652 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.367668 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.367678 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.488125 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.488164 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.488174 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.488187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.488196 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.591071 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.591124 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.591135 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.591154 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.591167 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.615553 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:14:07.986218215 +0000 UTC Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.632880 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.632924 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:12 crc kubenswrapper[4704]: E0122 16:29:12.633033 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:12 crc kubenswrapper[4704]: E0122 16:29:12.633210 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.693440 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.693478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.693488 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.693500 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.693508 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.797471 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.797550 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.797578 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.797604 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.797620 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.900205 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.900248 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.900257 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.900275 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:12 crc kubenswrapper[4704]: I0122 16:29:12.900284 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:12Z","lastTransitionTime":"2026-01-22T16:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.002984 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.003034 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.003047 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.003064 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.003435 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.105588 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.105619 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.105628 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.105660 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.105671 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.215636 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.215686 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.215699 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.215717 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.215736 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.317653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.317685 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.317732 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.317745 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.317753 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.420915 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.420964 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.420975 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.420993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.421006 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.523405 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.523444 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.523452 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.523464 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.523473 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.616725 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:53:50.198848519 +0000 UTC Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.626037 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.626083 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.626098 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.626116 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.626128 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.633559 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.633670 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:13 crc kubenswrapper[4704]: E0122 16:29:13.633842 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:13 crc kubenswrapper[4704]: E0122 16:29:13.634026 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.728445 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.728516 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.728529 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.728554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.728569 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.832268 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.832325 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.832337 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.832361 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.832373 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.935966 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.936034 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.936054 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.936079 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:13 crc kubenswrapper[4704]: I0122 16:29:13.936096 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:13Z","lastTransitionTime":"2026-01-22T16:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.039125 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.039191 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.039231 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.039259 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.039281 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.142008 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.142056 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.142067 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.142084 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.142096 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.245257 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.245290 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.245299 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.245316 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.245326 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.347314 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.347355 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.347373 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.347388 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.347398 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.449685 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.449749 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.449767 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.449824 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.449843 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.552883 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.552915 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.552923 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.552937 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.552945 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.617243 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:24:04.853460119 +0000 UTC Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.632863 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:14 crc kubenswrapper[4704]: E0122 16:29:14.632999 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.632882 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:14 crc kubenswrapper[4704]: E0122 16:29:14.633259 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.655654 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.655686 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.655696 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.655709 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.655717 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.758311 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.758350 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.758358 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.758372 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.758380 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.860930 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.860971 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.860982 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.861001 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.861012 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.963031 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.963102 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.963125 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.963156 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:14 crc kubenswrapper[4704]: I0122 16:29:14.963223 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:14Z","lastTransitionTime":"2026-01-22T16:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.065270 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.065334 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.065346 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.065364 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.065377 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.168469 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.168521 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.168533 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.168550 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.168559 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.271354 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.271392 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.271407 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.271427 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.271441 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.373878 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.373928 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.373944 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.373967 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.373983 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.475770 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.475872 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.475890 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.475917 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.475936 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.578853 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.578947 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.578965 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.578988 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.579006 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.617985 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:09:00.674419356 +0000 UTC Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.633565 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:15 crc kubenswrapper[4704]: E0122 16:29:15.633702 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.634685 4704 scope.go:117] "RemoveContainer" containerID="e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.634908 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:15 crc kubenswrapper[4704]: E0122 16:29:15.635521 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.652259 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.670560 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.682172 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.682451 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.682462 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.682479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.682491 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.684920 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.699177 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.713441 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.724615 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.741481 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.755633 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.767149 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.778508 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.785221 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.785364 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.785381 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.785399 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.785411 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.795396 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.814999 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 
16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d9
8b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.827196 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.846155 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.858883 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.871975 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.887677 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.887719 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.887730 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.887747 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.887761 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.971100 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/1.log" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.974851 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32"} Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.975405 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.989162 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:15Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.990128 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.990184 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.990198 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.990221 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:15 crc kubenswrapper[4704]: I0122 16:29:15.990235 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:15Z","lastTransitionTime":"2026-01-22T16:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.006335 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.022921 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.041990 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.068387 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.092850 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.092907 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.092922 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.092939 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.093337 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.095988 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.121344 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e
4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.149301 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.162260 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.187507 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.196218 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.196272 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.196283 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.196300 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.196311 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.221807 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z 
is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.243277 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.258261 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.274723 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.294305 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 
16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": 
tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.299146 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.299181 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.299192 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.299221 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.299232 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.305503 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:16Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:16 crc 
kubenswrapper[4704]: I0122 16:29:16.401995 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.402038 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.402048 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.402064 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.402074 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.505108 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.505164 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.505175 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.505194 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.505206 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.608089 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.608132 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.608145 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.608163 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.608173 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.619081 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:20:44.359845747 +0000 UTC Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.633450 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.633444 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:16 crc kubenswrapper[4704]: E0122 16:29:16.633656 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:16 crc kubenswrapper[4704]: E0122 16:29:16.633758 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.711686 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.711732 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.711745 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.711764 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.711777 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.815302 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.815393 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.815410 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.815441 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.815478 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.918383 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.918459 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.918478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.918509 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.918529 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:16Z","lastTransitionTime":"2026-01-22T16:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:16 crc kubenswrapper[4704]: I0122 16:29:16.950341 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:16 crc kubenswrapper[4704]: E0122 16:29:16.950534 4704 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:16 crc kubenswrapper[4704]: E0122 16:29:16.950645 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:32.950618167 +0000 UTC m=+65.595164897 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.021540 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.021610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.021631 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.021660 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.021683 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.124298 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.124357 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.124367 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.124386 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.124401 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.228625 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.228690 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.228704 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.228725 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.228738 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.332374 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.332440 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.332455 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.332474 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.332487 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.436274 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.436356 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.436373 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.436398 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.436419 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.538870 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.538920 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.538935 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.538956 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.538973 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.557374 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.557488 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 16:29:49.557464878 +0000 UTC m=+82.202011618 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.619714 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:41:56.434074066 +0000 UTC Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.633292 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.633307 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.633541 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.633613 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.640984 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.641023 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.641033 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.641048 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.641060 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.650221 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.663019 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.663137 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.663175 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.663221 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.663352 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.663451 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:49.663420812 +0000 UTC m=+82.307967552 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.663874 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.663973 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:49.663949947 +0000 UTC m=+82.308496677 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664308 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664362 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664387 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664470 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:49.66444813 +0000 UTC m=+82.308994870 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664486 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664533 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664553 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.664646 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:29:49.664615675 +0000 UTC m=+82.309162405 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.669626 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\
\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.681180 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.693841 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.705697 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.720295 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.735784 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.745097 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.745161 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.745175 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.745217 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.745231 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.748981 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.761582 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.779276 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 
16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": 
tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.791679 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc 
kubenswrapper[4704]: I0122 16:29:17.802502 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.813989 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.827130 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.840609 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.848072 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.848124 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.848144 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.848162 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.848174 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.856058 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:17Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.951531 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.951580 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.951595 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.951616 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.951629 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:17Z","lastTransitionTime":"2026-01-22T16:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.986110 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/2.log" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.987281 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/1.log" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.991704 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32" exitCode=1 Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.991758 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32"} Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.991843 4704 scope.go:117] "RemoveContainer" containerID="e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8" Jan 22 16:29:17 crc kubenswrapper[4704]: I0122 16:29:17.992772 4704 scope.go:117] "RemoveContainer" containerID="2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32" Jan 22 16:29:17 crc kubenswrapper[4704]: E0122 16:29:17.993006 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.016992 4704 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.032944 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.054909 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.054963 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.054979 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.055001 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.055018 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.056863 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.093852 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 
16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service 
openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\
"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.109463 
4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc 
kubenswrapper[4704]: I0122 16:29:18.129454 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.154234 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.157359 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.157430 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.157447 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.157468 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.157484 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.211437 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.231154 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.252562 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.261124 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.261195 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.261215 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.261238 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.261253 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.266297 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.276664 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.287694 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.304688 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.318067 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.329063 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:18Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.364592 4704 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.364631 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.364641 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.364656 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.364666 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.467556 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.467603 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.467616 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.467632 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.467644 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.570528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.570579 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.570588 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.570602 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.570610 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.619899 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:40:47.575747983 +0000 UTC Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.633230 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:18 crc kubenswrapper[4704]: E0122 16:29:18.633374 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.633248 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:18 crc kubenswrapper[4704]: E0122 16:29:18.633700 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.673856 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.673914 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.673931 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.673953 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.673969 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.776766 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.776854 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.776867 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.776886 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.776899 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.879503 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.879546 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.879554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.879566 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.879575 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.983000 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.983036 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.983045 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.983060 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:18 crc kubenswrapper[4704]: I0122 16:29:18.983071 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:18Z","lastTransitionTime":"2026-01-22T16:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.004585 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/2.log" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.085744 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.085889 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.085914 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.085942 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.085959 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.189005 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.189420 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.189438 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.189463 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.189496 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.293118 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.293212 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.293239 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.293269 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.293294 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.396524 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.396566 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.396577 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.396592 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.396602 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.498721 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.498759 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.498772 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.498786 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.498818 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.602062 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.602468 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.602485 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.602510 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.602531 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.621091 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:12:21.119736351 +0000 UTC Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.633553 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:19 crc kubenswrapper[4704]: E0122 16:29:19.633770 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.633579 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:19 crc kubenswrapper[4704]: E0122 16:29:19.634297 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.705304 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.705354 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.705369 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.705390 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.705406 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.808563 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.808610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.808624 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.808642 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.808658 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.911455 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.911509 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.911526 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.911549 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.911565 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.963906 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.963979 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.964079 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.964108 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.964125 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:19 crc kubenswrapper[4704]: E0122 16:29:19.986224 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:19Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.991408 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.991440 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.991451 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.991467 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:19 crc kubenswrapper[4704]: I0122 16:29:19.991480 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:19Z","lastTransitionTime":"2026-01-22T16:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: E0122 16:29:20.011237 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.017463 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.017528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.017550 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.017582 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.017605 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: E0122 16:29:20.034290 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.038676 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.038744 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.038766 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.038828 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.038854 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: E0122 16:29:20.057957 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.063493 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.063555 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.063569 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.063596 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.063613 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: E0122 16:29:20.084271 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:20Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:20 crc kubenswrapper[4704]: E0122 16:29:20.084507 4704 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.091783 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.091879 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.091918 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.091952 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.091978 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.195686 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.195731 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.195744 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.195763 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.195775 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.298611 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.298642 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.298653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.298669 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.298680 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.401372 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.401437 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.401458 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.401484 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.401504 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.505088 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.505141 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.505152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.505170 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.505182 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.608243 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.608312 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.608336 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.608366 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.608386 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.622082 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:38:13.775501074 +0000 UTC Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.633620 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.633650 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:20 crc kubenswrapper[4704]: E0122 16:29:20.633941 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:20 crc kubenswrapper[4704]: E0122 16:29:20.634144 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.710586 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.710629 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.710642 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.710659 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.710671 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.813293 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.813342 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.813357 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.813374 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.813385 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.915740 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.915834 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.915861 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.915893 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:20 crc kubenswrapper[4704]: I0122 16:29:20.915914 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:20Z","lastTransitionTime":"2026-01-22T16:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.018650 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.018721 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.018742 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.018768 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.018840 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.121979 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.122055 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.122080 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.122110 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.122131 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.235937 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.235991 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.236000 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.236014 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.236038 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.339221 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.339300 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.339322 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.339350 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.339371 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.442911 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.442991 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.443026 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.443055 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.443076 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.545860 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.545921 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.545959 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.545993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.546015 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.622890 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:04:32.271448099 +0000 UTC Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.633478 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:21 crc kubenswrapper[4704]: E0122 16:29:21.633597 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.633612 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:21 crc kubenswrapper[4704]: E0122 16:29:21.633735 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.648055 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.648085 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.648100 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.648113 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.648123 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.750620 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.750662 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.750673 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.750690 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.750705 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.853932 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.853972 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.853980 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.853993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.854007 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.956630 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.956673 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.956686 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.956700 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:21 crc kubenswrapper[4704]: I0122 16:29:21.956709 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:21Z","lastTransitionTime":"2026-01-22T16:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.059265 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.059301 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.059311 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.059325 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.059334 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.162629 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.162695 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.162712 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.162737 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.162757 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.265140 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.265187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.265198 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.265214 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.265225 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.367406 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.367502 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.367527 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.367563 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.367586 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.473307 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.473382 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.473400 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.473425 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.473449 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.577097 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.577158 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.577176 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.577263 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.577392 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.623705 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:06:17.776020161 +0000 UTC Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.633402 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.633409 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:22 crc kubenswrapper[4704]: E0122 16:29:22.633561 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:22 crc kubenswrapper[4704]: E0122 16:29:22.633858 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.679659 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.679731 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.679752 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.679782 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.679827 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.782437 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.782493 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.782520 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.782543 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.782561 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.787444 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.801865 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.827095 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.848634 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.866458 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: 
I0122 16:29:22.884254 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.884488 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.884567 4704 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.884625 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.884654 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.884669 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.896198 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.923891 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.938448 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.953957 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.977670 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.987498 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.987541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.987556 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.987577 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.987593 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:22Z","lastTransitionTime":"2026-01-22T16:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:22 crc kubenswrapper[4704]: I0122 16:29:22.992014 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:22Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.005810 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:23Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.029202 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 
16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service 
openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\
"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:23Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.041032 
4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:23Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:23 crc 
kubenswrapper[4704]: I0122 16:29:23.057505 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:23Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.080132 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:23Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.090748 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.090808 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.090826 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.090849 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.090864 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.094932 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:23Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.193467 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.193519 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.193530 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.193548 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.193558 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.296361 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.296427 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.296443 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.296468 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.296491 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.399766 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.399855 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.399874 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.399900 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.399918 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.503710 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.503781 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.503835 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.503871 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.503888 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.607081 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.607168 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.607187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.607217 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.607232 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.624960 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:46:11.056427373 +0000 UTC Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.633445 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.633481 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:23 crc kubenswrapper[4704]: E0122 16:29:23.633671 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:23 crc kubenswrapper[4704]: E0122 16:29:23.633762 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.709160 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.709202 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.709212 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.709224 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.709233 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.811397 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.811459 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.811483 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.811513 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.811539 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.914904 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.915004 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.915025 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.915051 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:23 crc kubenswrapper[4704]: I0122 16:29:23.915069 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:23Z","lastTransitionTime":"2026-01-22T16:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.018013 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.018415 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.018610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.018870 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.019123 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.122702 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.122841 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.122906 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.122939 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.122962 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.225979 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.226029 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.226039 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.226055 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.226067 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.328145 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.328193 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.328204 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.328223 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.328234 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.431464 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.431534 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.431556 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.431583 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.431604 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.534562 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.534633 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.534652 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.534712 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.534731 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.626094 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:08:48.137395348 +0000 UTC Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.633568 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.633590 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:24 crc kubenswrapper[4704]: E0122 16:29:24.634045 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:24 crc kubenswrapper[4704]: E0122 16:29:24.634221 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.637093 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.637223 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.637246 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.637275 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.637294 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.739878 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.739949 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.739975 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.740004 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.740029 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.843420 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.843456 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.843465 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.843477 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.843486 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.946243 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.946315 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.946334 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.946360 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:24 crc kubenswrapper[4704]: I0122 16:29:24.946379 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:24Z","lastTransitionTime":"2026-01-22T16:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.049333 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.049403 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.049431 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.049461 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.049483 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.152348 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.152388 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.152399 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.152414 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.152425 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.255406 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.255447 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.255460 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.255475 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.255488 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.358159 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.359606 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.359644 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.359664 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.359675 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.463370 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.463439 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.463462 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.463503 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.463526 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.566772 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.566877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.566900 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.566930 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.566951 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.626947 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:40:22.581435325 +0000 UTC Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.633592 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:25 crc kubenswrapper[4704]: E0122 16:29:25.633786 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.633920 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:25 crc kubenswrapper[4704]: E0122 16:29:25.634114 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.669887 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.669943 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.669956 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.669977 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.669988 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.773329 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.773380 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.773389 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.773405 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.773416 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.876611 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.876674 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.876692 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.876718 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.876737 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.980388 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.980507 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.980542 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.980585 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:25 crc kubenswrapper[4704]: I0122 16:29:25.980604 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:25Z","lastTransitionTime":"2026-01-22T16:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.083430 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.083763 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.083895 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.083996 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.084167 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.187343 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.187661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.187895 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.188103 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.188280 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.291443 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.291821 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.292033 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.292226 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.292451 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.395033 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.395064 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.395074 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.395089 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.395101 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.497174 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.497224 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.497238 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.497258 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.497273 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.600580 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.600622 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.600631 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.600647 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.600655 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.627109 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:10:40.994001351 +0000 UTC Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.633572 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.633614 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:26 crc kubenswrapper[4704]: E0122 16:29:26.633962 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:26 crc kubenswrapper[4704]: E0122 16:29:26.633966 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.703611 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.703656 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.703672 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.703694 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.703712 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.806525 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.806598 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.806621 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.806653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.806678 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.910645 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.910768 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.910877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.910906 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:26 crc kubenswrapper[4704]: I0122 16:29:26.910929 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:26Z","lastTransitionTime":"2026-01-22T16:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.013746 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.013850 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.013868 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.013890 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.013906 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.116923 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.116971 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.116988 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.117009 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.117028 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.220023 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.220149 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.220169 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.220197 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.220216 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.322669 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.322742 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.322765 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.322829 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.322854 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.425625 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.425674 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.425685 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.425704 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.425719 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.527879 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.527931 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.527947 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.527969 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.527989 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.627317 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:47:05.45008219 +0000 UTC Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.631233 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.631301 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.631314 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.631338 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.631350 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.633035 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:27 crc kubenswrapper[4704]: E0122 16:29:27.633262 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.633343 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:27 crc kubenswrapper[4704]: E0122 16:29:27.633727 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.652219 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.668977 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.682730 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.711775 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2f76d1e5d66aad6e6b0a7bf793b19cf0d1b7ed32d79287019f711482187c1b8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0122 16:28:59.627162 6119 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 16:28:59.627148 6119 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0122 
16:28:59.627182 6119 services_controller.go:443] Built service openshift-config-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.161\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0122 16:28:59.627198 6119 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service 
openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\
"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.728682 
4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc 
kubenswrapper[4704]: I0122 16:29:27.734125 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.734187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.734204 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.734229 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.734272 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.743777 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a
89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.760862 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.780266 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.799222 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.822076 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.837282 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.837335 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.837351 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.837372 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.837387 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.839051 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917
880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.855134 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.867043 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.877657 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.890472 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.905590 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.916478 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:27Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.939999 4704 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.940062 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.940072 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.940085 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:27 crc kubenswrapper[4704]: I0122 16:29:27.940093 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:27Z","lastTransitionTime":"2026-01-22T16:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.041877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.041913 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.041924 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.041941 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.041953 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.144647 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.144749 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.144759 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.144775 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.144785 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.247437 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.247491 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.247503 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.247522 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.247535 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.349997 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.350037 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.350049 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.350065 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.350077 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.452616 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.452688 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.452706 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.452732 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.452749 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.556027 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.556126 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.556156 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.556204 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.556233 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.627984 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:15:09.211754762 +0000 UTC Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.633299 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.633311 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:28 crc kubenswrapper[4704]: E0122 16:29:28.633411 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:28 crc kubenswrapper[4704]: E0122 16:29:28.633491 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.658573 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.658605 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.658613 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.658625 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.658635 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.761228 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.761282 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.761291 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.761304 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.761315 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.863654 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.863702 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.863713 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.863730 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.863742 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.966421 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.966478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.966521 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.966545 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:28 crc kubenswrapper[4704]: I0122 16:29:28.966556 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:28Z","lastTransitionTime":"2026-01-22T16:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.069214 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.069278 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.069297 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.069324 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.069343 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.172751 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.172839 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.172859 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.172886 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.172903 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.274941 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.274974 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.274982 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.274995 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.275004 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.378515 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.378576 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.378587 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.378603 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.378649 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.481404 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.481458 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.481479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.481505 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.481524 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.584616 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.584673 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.584693 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.584720 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.584741 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.628501 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:58:42.224256727 +0000 UTC Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.634166 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.634203 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:29 crc kubenswrapper[4704]: E0122 16:29:29.634330 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:29 crc kubenswrapper[4704]: E0122 16:29:29.634428 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.687592 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.687629 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.687643 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.687661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.687675 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.790368 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.790396 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.790405 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.790418 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.790427 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.892749 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.892852 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.892874 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.892895 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.892911 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.996225 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.996277 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.996296 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.996321 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:29 crc kubenswrapper[4704]: I0122 16:29:29.996339 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:29Z","lastTransitionTime":"2026-01-22T16:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.099619 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.099851 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.099875 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.099898 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.099943 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.203564 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.203615 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.203623 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.203641 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.203651 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.268891 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.268922 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.268931 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.268945 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.268954 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.290584 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.295623 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.295675 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.295692 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.295719 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.295735 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.317401 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.323211 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.323301 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.323319 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.323343 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.323389 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.345123 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.350071 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.350139 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.350155 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.350182 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.350201 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.367082 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.372047 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.372088 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.372098 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.372114 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.372127 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.385637 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.385756 4704 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.387187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.387213 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.387225 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.387240 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.387249 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.489457 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.489527 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.489541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.489560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.489572 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.592586 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.592651 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.592663 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.592678 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.592688 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.629133 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:28:34.208152622 +0000 UTC Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.633506 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.633518 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.633656 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:30 crc kubenswrapper[4704]: E0122 16:29:30.633727 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.695508 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.695549 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.695557 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.695570 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.695580 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.798068 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.798137 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.798154 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.798177 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.798194 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.900282 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.900341 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.900358 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.900381 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:30 crc kubenswrapper[4704]: I0122 16:29:30.900397 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:30Z","lastTransitionTime":"2026-01-22T16:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.003132 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.003187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.003206 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.003231 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.003249 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.105902 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.105952 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.105970 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.105992 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.106006 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.208586 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.208653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.208670 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.208693 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.208710 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.311310 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.311353 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.311364 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.311382 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.311393 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.413190 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.413481 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.413557 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.413576 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.413589 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.515999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.516031 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.516040 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.516057 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.516068 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.618223 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.618476 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.618559 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.618635 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.618703 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.629741 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:20:07.355450428 +0000 UTC Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.633035 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:31 crc kubenswrapper[4704]: E0122 16:29:31.633186 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.633048 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:31 crc kubenswrapper[4704]: E0122 16:29:31.633324 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.725275 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.725312 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.725323 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.725340 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.725351 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.827523 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.827564 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.827576 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.827611 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.827622 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.930588 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.930636 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.930646 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.930665 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:31 crc kubenswrapper[4704]: I0122 16:29:31.930674 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:31Z","lastTransitionTime":"2026-01-22T16:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.034387 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.034444 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.034461 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.034485 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.034502 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.136654 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.136688 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.136699 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.136714 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.136761 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.238720 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.238765 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.238777 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.238812 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.238826 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.340686 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.340738 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.340749 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.340763 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.340772 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.443570 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.443607 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.443616 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.443631 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.443641 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.545992 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.546041 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.546057 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.546079 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.546093 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.630811 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:18:44.60174963 +0000 UTC Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.633113 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.633259 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:32 crc kubenswrapper[4704]: E0122 16:29:32.633571 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:32 crc kubenswrapper[4704]: E0122 16:29:32.633754 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.648525 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.648567 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.648577 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.648591 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.648600 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.750896 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.750944 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.750954 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.750969 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.750982 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.853512 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.853545 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.853556 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.853570 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.853578 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.955557 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.955592 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.955602 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.955615 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:32 crc kubenswrapper[4704]: I0122 16:29:32.955624 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:32Z","lastTransitionTime":"2026-01-22T16:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.040563 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:33 crc kubenswrapper[4704]: E0122 16:29:33.040689 4704 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:33 crc kubenswrapper[4704]: E0122 16:29:33.040736 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:05.040721635 +0000 UTC m=+97.685268335 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.058188 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.058249 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.058261 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.058286 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.058298 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.160279 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.160363 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.160376 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.160397 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.160412 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.264086 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.264159 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.264176 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.264204 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.264239 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.366920 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.366999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.367011 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.367041 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.367063 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.469860 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.469899 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.469910 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.469924 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.469935 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.572842 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.572909 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.572921 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.572946 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.572962 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.631009 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:54:27.489397583 +0000 UTC Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.633553 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.633544 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:33 crc kubenswrapper[4704]: E0122 16:29:33.633951 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:33 crc kubenswrapper[4704]: E0122 16:29:33.634115 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.634244 4704 scope.go:117] "RemoveContainer" containerID="2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32" Jan 22 16:29:33 crc kubenswrapper[4704]: E0122 16:29:33.634523 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.652039 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.662223 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.672610 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.675300 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.675361 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.675382 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.675430 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.675449 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.686392 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.700966 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.712156 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.724653 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.737557 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.753702 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.764635 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.777409 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.777455 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.777467 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.777482 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.777492 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.783378 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d9
8b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.794623 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.806294 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.816655 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.828171 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.840900 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.853753 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.879394 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.879656 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.879740 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.879839 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.879927 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.982655 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.982708 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.982724 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.982746 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:33 crc kubenswrapper[4704]: I0122 16:29:33.982762 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:33Z","lastTransitionTime":"2026-01-22T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.068201 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/0.log" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.068252 4704 generic.go:334] "Generic (PLEG): container finished" podID="9357b7a7-d902-4f7e-97b9-b0a7871ec95e" containerID="4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7" exitCode=1 Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.068282 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerDied","Data":"4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.068598 4704 scope.go:117] "RemoveContainer" containerID="4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.081110 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.085246 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.085293 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.085305 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.085323 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.085334 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.094672 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.105450 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.115944 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.129639 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:33Z\\\",\\\"message\\\":\\\"2026-01-22T16:28:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc\\\\n2026-01-22T16:28:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc to /host/opt/cni/bin/\\\\n2026-01-22T16:28:48Z [verbose] multus-daemon started\\\\n2026-01-22T16:28:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:29:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.139486 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.148966 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.160494 4704 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.172215 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.185145 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace
52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.187594 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.187703 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.187770 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.187874 4704 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.187937 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.202678 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d9
8b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.212973 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.222602 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.233255 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.244313 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.254783 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.271577 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.290147 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.290289 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.290354 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.290425 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.290495 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.392645 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.393061 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.393212 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.393351 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.393483 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.495927 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.495971 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.495982 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.495999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.496009 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.598466 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.598494 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.598504 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.598518 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.598530 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.631529 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:26:31.768094955 +0000 UTC Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.632774 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:34 crc kubenswrapper[4704]: E0122 16:29:34.632905 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.633117 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:34 crc kubenswrapper[4704]: E0122 16:29:34.633355 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.700666 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.700707 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.700716 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.700731 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.700740 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.802624 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.802660 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.802668 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.802680 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.802689 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.904239 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.904278 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.904286 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.904299 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:34 crc kubenswrapper[4704]: I0122 16:29:34.904309 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:34Z","lastTransitionTime":"2026-01-22T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.006392 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.006477 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.006501 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.006530 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.006554 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.072509 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/0.log" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.072558 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerStarted","Data":"6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.085981 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.101208 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.108866 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.108904 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.108913 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.108927 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.108937 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.109311 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.117577 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d07741311
0c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.127044 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:33Z\\\",\\\"message\\\":\\\"2026-01-22T16:28:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc\\\\n2026-01-22T16:28:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc to /host/opt/cni/bin/\\\\n2026-01-22T16:28:48Z [verbose] multus-daemon started\\\\n2026-01-22T16:28:48Z [verbose] 
Readiness Indicator file check\\\\n2026-01-22T16:29:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.134886 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479f
e187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.147849 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e
4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.160648 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.174629 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.188652 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace
52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.209195 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d9
8b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.211469 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.211499 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.211508 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.211520 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.211528 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.221382 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc 
kubenswrapper[4704]: I0122 16:29:35.234332 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.246922 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.259584 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.271937 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.285261 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.313738 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.313786 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.313817 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.313835 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.313846 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.415906 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.415943 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.415955 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.415970 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.415983 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.519173 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.519242 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.519262 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.519301 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.519342 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.621718 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.621769 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.621781 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.621818 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.621838 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.632098 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:46:10.112287152 +0000 UTC Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.633379 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.633456 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:35 crc kubenswrapper[4704]: E0122 16:29:35.633514 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:35 crc kubenswrapper[4704]: E0122 16:29:35.633572 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.647351 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.725306 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.725364 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.725414 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.725439 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.725457 4704 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.827961 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.828037 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.828059 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.828102 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.828125 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.930694 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.930744 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.930756 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.930771 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:35 crc kubenswrapper[4704]: I0122 16:29:35.930783 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:35Z","lastTransitionTime":"2026-01-22T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.033369 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.033398 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.033406 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.033419 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.033427 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.135469 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.135526 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.135536 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.135551 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.135562 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.237281 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.237307 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.237315 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.237326 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.237335 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.340350 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.340378 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.340389 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.340402 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.340412 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.442345 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.442375 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.442383 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.442395 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.442405 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.544560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.544608 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.544619 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.544637 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.544648 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.632642 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:42:31.673759056 +0000 UTC Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.632758 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:36 crc kubenswrapper[4704]: E0122 16:29:36.632884 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.632762 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:36 crc kubenswrapper[4704]: E0122 16:29:36.633171 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.646502 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.646570 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.646590 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.646621 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.646644 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.748779 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.748818 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.748826 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.748840 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.748851 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.851488 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.851530 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.851544 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.851561 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.851571 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.954444 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.954527 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.954559 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.954590 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:36 crc kubenswrapper[4704]: I0122 16:29:36.954611 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:36Z","lastTransitionTime":"2026-01-22T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.056502 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.056541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.056551 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.056565 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.056577 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.158452 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.158495 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.158504 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.158522 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.158531 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.261845 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.261909 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.261928 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.261952 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.261969 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.364430 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.364483 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.364499 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.364522 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.364539 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.502211 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.502911 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.502946 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.502975 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.502995 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.605663 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.605946 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.606046 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.606156 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.606244 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.632991 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:56:04.487155965 +0000 UTC Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.633117 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:37 crc kubenswrapper[4704]: E0122 16:29:37.633208 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.633371 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:37 crc kubenswrapper[4704]: E0122 16:29:37.633546 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.648376 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.661057 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.674815 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.685786 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db12f584-d5e2-43f4-9513-74e9fb3b1f35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3107659da8eed6f0a85da86064deaeaf0101eea14efd6380f3aa8a2056674f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.699382 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.709174 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.709301 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.709379 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.709452 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.709526 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.716347 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.731657 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.748296 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc 
kubenswrapper[4704]: I0122 16:29:37.759494 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.771666 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.780066 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.788142 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.802598 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:33Z\\\",\\\"message\\\":\\\"2026-01-22T16:28:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc\\\\n2026-01-22T16:28:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc to /host/opt/cni/bin/\\\\n2026-01-22T16:28:48Z [verbose] multus-daemon started\\\\n2026-01-22T16:28:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:29:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.812355 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.812560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.812653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 
16:29:37.812737 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.812632 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.812856 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.823439 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.833683 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.849228 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d9
8b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.857806 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.915181 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.915217 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.915226 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.915240 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:37 crc kubenswrapper[4704]: I0122 16:29:37.915248 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:37Z","lastTransitionTime":"2026-01-22T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.017984 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.018250 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.018312 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.018394 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.018479 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.121315 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.121347 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.121357 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.121370 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.121382 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.223931 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.223958 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.223970 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.223984 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.223995 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.329095 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.329195 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.329229 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.329253 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.329271 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.432423 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.432462 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.432470 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.432485 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.432494 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.534871 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.535188 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.535273 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.535394 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.535496 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.632945 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:38 crc kubenswrapper[4704]: E0122 16:29:38.633110 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.633320 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:58:12.084694672 +0000 UTC Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.633939 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:38 crc kubenswrapper[4704]: E0122 16:29:38.634068 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.637606 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.637646 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.637661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.637680 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.637696 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.739817 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.739857 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.739868 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.739882 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.739893 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.842382 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.842470 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.842489 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.842516 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.842532 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.944868 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.944906 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.944917 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.944931 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:38 crc kubenswrapper[4704]: I0122 16:29:38.944940 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:38Z","lastTransitionTime":"2026-01-22T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.047748 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.047829 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.047842 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.047860 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.047871 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.150970 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.151016 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.151026 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.151041 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.151052 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.253509 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.253550 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.253561 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.253575 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.253587 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.355777 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.355877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.355893 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.355914 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.355926 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.458206 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.458263 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.458275 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.458296 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.458308 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.560726 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.560764 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.560776 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.560817 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.560830 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.633673 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:44:45.553129372 +0000 UTC Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.633696 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.633803 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:39 crc kubenswrapper[4704]: E0122 16:29:39.633905 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:39 crc kubenswrapper[4704]: E0122 16:29:39.633998 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.663216 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.663265 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.663278 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.663295 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.663306 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.765827 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.765872 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.765906 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.765924 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.765935 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.868179 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.868229 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.868247 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.868268 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.868282 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.974035 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.974088 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.974103 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.974121 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:39 crc kubenswrapper[4704]: I0122 16:29:39.974134 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:39Z","lastTransitionTime":"2026-01-22T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.076121 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.076163 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.076173 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.076189 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.076200 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.179132 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.179177 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.179187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.179204 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.179218 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.281559 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.281607 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.281621 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.281638 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.281648 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.384089 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.384129 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.384141 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.384158 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.384170 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.463619 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.463932 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.464027 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.464140 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.464253 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.478808 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.483199 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.483250 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.483259 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.483275 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.483285 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.496617 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.499942 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.499975 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.499983 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.499997 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.500010 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.512746 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.516486 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.516530 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.516541 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.516559 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.516571 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.529318 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.533237 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.533368 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.533480 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.533586 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.533681 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.550007 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.550265 4704 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.552077 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.552114 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.552128 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.552144 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.552157 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.633890 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.633945 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:01:13.903000578 +0000 UTC Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.633911 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.634063 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:40 crc kubenswrapper[4704]: E0122 16:29:40.634209 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.655293 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.655340 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.655359 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.655383 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.655400 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.757353 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.757383 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.757391 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.757404 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.757412 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.860240 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.860275 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.860287 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.860302 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.860315 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.962473 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.962507 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.962517 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.962535 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:40 crc kubenswrapper[4704]: I0122 16:29:40.962546 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:40Z","lastTransitionTime":"2026-01-22T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.065166 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.065287 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.065310 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.065334 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.065350 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.167936 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.167975 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.167986 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.168003 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.168014 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.270560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.270885 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.270975 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.271049 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.271119 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.373716 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.373769 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.373784 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.373827 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.373842 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.476670 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.476916 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.477015 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.477115 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.477212 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.579695 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.579728 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.579736 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.579749 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.579758 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.633629 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.633939 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:41 crc kubenswrapper[4704]: E0122 16:29:41.634014 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.634140 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:28:33.424973354 +0000 UTC Jan 22 16:29:41 crc kubenswrapper[4704]: E0122 16:29:41.634162 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.682990 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.683046 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.683059 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.683077 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.683089 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.785402 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.785441 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.785451 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.785468 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.785480 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.888282 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.888332 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.888348 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.888369 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.888386 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.991004 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.991079 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.991101 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.991130 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:41 crc kubenswrapper[4704]: I0122 16:29:41.991149 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:41Z","lastTransitionTime":"2026-01-22T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.094275 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.094338 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.094399 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.094427 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.094449 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.197721 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.197777 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.197820 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.197845 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.197863 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.299938 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.299977 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.299989 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.300007 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.300019 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.402350 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.402377 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.402385 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.402399 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.402407 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.504506 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.504745 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.504878 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.504985 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.505089 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.609152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.609260 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.609293 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.609325 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.609347 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.633573 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.633626 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:42 crc kubenswrapper[4704]: E0122 16:29:42.633706 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:42 crc kubenswrapper[4704]: E0122 16:29:42.633849 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.634674 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:16:53.744077377 +0000 UTC Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.712500 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.712535 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.712545 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.712559 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.712568 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.814670 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.814725 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.814738 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.814757 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.814770 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.917695 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.917752 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.917767 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.917811 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:42 crc kubenswrapper[4704]: I0122 16:29:42.917825 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:42Z","lastTransitionTime":"2026-01-22T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.020371 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.020632 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.020703 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.020813 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.020902 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.123203 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.123482 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.123580 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.123678 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.123760 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.226213 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.226264 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.226279 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.226300 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.226316 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.328333 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.328365 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.328374 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.328388 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.328397 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.431169 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.431246 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.431270 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.431304 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.431327 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.534466 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.534520 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.534537 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.534560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.534572 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.633570 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.633599 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:43 crc kubenswrapper[4704]: E0122 16:29:43.633828 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:43 crc kubenswrapper[4704]: E0122 16:29:43.633961 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.635526 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:12:45.438756846 +0000 UTC Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.638035 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.638099 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.638124 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.638152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.638176 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.741354 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.741416 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.741433 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.741459 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.741477 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.844181 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.844250 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.844270 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.844298 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.844317 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.946738 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.946783 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.946813 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.946829 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:43 crc kubenswrapper[4704]: I0122 16:29:43.946840 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:43Z","lastTransitionTime":"2026-01-22T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.048577 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.048627 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.048634 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.048649 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.048658 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.151184 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.151255 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.151284 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.151307 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.151326 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.253606 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.253645 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.253655 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.253674 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.253687 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.356942 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.357250 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.357378 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.357506 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.357593 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.460001 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.460056 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.460072 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.460096 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.460112 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.562667 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.562702 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.562711 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.562725 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.562734 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.633331 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.633392 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:44 crc kubenswrapper[4704]: E0122 16:29:44.633997 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:44 crc kubenswrapper[4704]: E0122 16:29:44.634010 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.636582 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:47:38.266215365 +0000 UTC Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.664287 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.664429 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.664447 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.664508 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.664525 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.767905 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.767980 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.768008 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.768038 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.768064 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.870379 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.870632 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.870838 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.870962 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.871045 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.973923 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.973979 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.973999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.974019 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:44 crc kubenswrapper[4704]: I0122 16:29:44.974032 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:44Z","lastTransitionTime":"2026-01-22T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.076788 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.076888 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.076910 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.076941 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.077015 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.179223 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.179288 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.179311 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.179340 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.179361 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.282169 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.282210 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.282219 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.282232 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.282243 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.384911 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.384956 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.384964 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.384978 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.384988 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.487855 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.487913 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.487932 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.487957 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.487975 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.590771 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.590901 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.590926 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.590958 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.590980 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.633189 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.633266 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:45 crc kubenswrapper[4704]: E0122 16:29:45.633447 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:45 crc kubenswrapper[4704]: E0122 16:29:45.633614 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.636722 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:08:04.534597459 +0000 UTC Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.694388 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.694440 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.694452 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.694470 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.694482 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.797635 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.797670 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.797681 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.797697 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.797710 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.901054 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.901113 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.901129 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.901152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:45 crc kubenswrapper[4704]: I0122 16:29:45.901166 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:45Z","lastTransitionTime":"2026-01-22T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.003948 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.003981 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.003991 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.004005 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.004016 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.107606 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.107659 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.107674 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.107695 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.107713 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.210109 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.210146 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.210158 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.210174 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.210186 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.313220 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.313256 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.313267 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.313282 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.313293 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.415497 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.415529 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.415539 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.415554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.415564 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.518762 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.518868 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.518894 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.518926 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.518965 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.621773 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.621832 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.621844 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.621862 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.621874 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.633332 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.633343 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:46 crc kubenswrapper[4704]: E0122 16:29:46.633512 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:46 crc kubenswrapper[4704]: E0122 16:29:46.633662 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.637548 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:34:31.343312467 +0000 UTC Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.723727 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.723770 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.723780 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.723827 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.723839 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.826277 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.826310 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.826321 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.826335 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.826346 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.928653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.928689 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.928697 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.928710 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:46 crc kubenswrapper[4704]: I0122 16:29:46.928718 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:46Z","lastTransitionTime":"2026-01-22T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.032324 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.032401 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.032425 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.032455 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.032479 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.136086 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.136141 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.136152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.136169 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.136180 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.239336 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.239398 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.239423 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.239454 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.239477 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.342104 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.342477 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.342487 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.342501 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.342511 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.446083 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.446153 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.446170 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.446193 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.446212 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.549174 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.549223 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.549233 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.549247 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.549256 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.634140 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.634175 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:47 crc kubenswrapper[4704]: E0122 16:29:47.634319 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:47 crc kubenswrapper[4704]: E0122 16:29:47.634526 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.637664 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:29:21.980897766 +0000 UTC Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.651746 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.651782 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.651817 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.651829 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.651840 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.654545 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.671564 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.688805 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.706781 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d9
8b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.722418 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.736864 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.750956 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.755132 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.755263 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.755350 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.755436 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.755520 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.769392 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.804346 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db12f584-d5e2-43f4-9513-74e9fb3b1f35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3107659da8eed6f0a85da86064deaeaf0101eea14efd6380f3aa8a2056674f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.826042 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.843311 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.857666 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.857698 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.857706 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.857719 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.857729 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.858025 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.869565 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e
4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.882291 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.893925 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.902919 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.913676 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.929483 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:33Z\\\",\\\"message\\\":\\\"2026-01-22T16:28:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc\\\\n2026-01-22T16:28:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc to /host/opt/cni/bin/\\\\n2026-01-22T16:28:48Z [verbose] multus-daemon started\\\\n2026-01-22T16:28:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:29:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:47Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.960260 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.960531 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.960600 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 
16:29:47.960704 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:47 crc kubenswrapper[4704]: I0122 16:29:47.960769 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:47Z","lastTransitionTime":"2026-01-22T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.062641 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.062684 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.062698 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.062717 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.062731 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.165486 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.165545 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.165565 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.165589 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.165605 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.269017 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.269144 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.269167 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.269196 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.269217 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.371912 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.371953 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.371965 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.371980 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.371992 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.474000 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.474255 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.474327 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.474413 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.474503 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.576843 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.576885 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.576897 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.576911 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.576919 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.633648 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.633756 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:48 crc kubenswrapper[4704]: E0122 16:29:48.634659 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:48 crc kubenswrapper[4704]: E0122 16:29:48.634772 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.635035 4704 scope.go:117] "RemoveContainer" containerID="2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.638478 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:07:35.105786462 +0000 UTC Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.680479 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.680531 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.680543 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.680560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.680573 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.782638 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.782707 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.782727 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.782751 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.782768 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.885341 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.885389 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.885405 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.885425 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.885440 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.988265 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.988314 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.988324 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.988340 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:48 crc kubenswrapper[4704]: I0122 16:29:48.988351 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:48Z","lastTransitionTime":"2026-01-22T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.090937 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.091075 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.091095 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.091120 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.091136 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.194700 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.194759 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.194781 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.194856 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.194877 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.297644 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.297704 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.297724 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.297753 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.297775 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.400310 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.400349 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.400359 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.400376 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.400386 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.502515 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.502572 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.502589 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.502610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.502624 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.605069 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.605154 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.605176 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.605204 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.605226 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.621644 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.621886 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 16:30:53.621868487 +0000 UTC m=+146.266415197 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.633594 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.633775 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.633883 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.634005 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.639507 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:12:32.553453435 +0000 UTC Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.707551 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.707593 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.707602 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.707617 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.707627 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.722446 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.722522 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.722563 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.722613 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722721 4704 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722839 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722845 4704 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722865 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722909 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722929 4704 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722865 4704 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722999 4704 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.722878 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:53.722847707 +0000 UTC m=+146.367394447 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.723068 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:53.723045272 +0000 UTC m=+146.367592012 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.723100 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:53.723085903 +0000 UTC m=+146.367632643 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:49 crc kubenswrapper[4704]: E0122 16:29:49.723131 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 16:30:53.723120054 +0000 UTC m=+146.367666794 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.809876 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.809933 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.809950 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.809973 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.809993 4704 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.912610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.912669 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.912685 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.912708 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:49 crc kubenswrapper[4704]: I0122 16:29:49.912725 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:49Z","lastTransitionTime":"2026-01-22T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.016048 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.016101 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.016123 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.016152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.016174 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.119564 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.119630 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.119653 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.119684 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.119707 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.124178 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/2.log" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.127503 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.128078 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.141433 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5e
ef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.151826 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.163566 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:33Z\\\",\\\"message\\\":\\\"2026-01-22T16:28:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc\\\\n2026-01-22T16:28:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc to /host/opt/cni/bin/\\\\n2026-01-22T16:28:48Z [verbose] multus-daemon started\\\\n2026-01-22T16:28:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:29:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.177651 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479f
e187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.188610 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e
4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.200340 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.214165 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.222734 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.222775 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.222784 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.222821 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.222832 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.236900 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.267566 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.280002 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc 
kubenswrapper[4704]: I0122 16:29:50.301695 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.314059 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.324992 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.325034 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.325046 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.325062 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.325074 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.328143 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.369553 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.381882 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.395766 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.408531 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.419204 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db12f584-d5e2-43f4-9513-74e9fb3b1f35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3107659da8eed6f0a85da86064deaeaf0101eea14efd6380f3aa8a2056674f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.427871 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.427943 4704 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.427958 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.427979 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.428450 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.530743 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.530785 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.530815 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.530831 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.530843 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.574356 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.574391 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.574400 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.574412 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.574422 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.586012 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.590355 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.590389 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.590398 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.590413 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.590424 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.602349 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.606592 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.606634 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.606647 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.606672 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.606687 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.619114 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.623278 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.623314 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.623328 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.623354 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.623407 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.633085 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.633183 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.633261 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.633334 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.636462 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.640355 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:43:39.185266896 +0000 UTC Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.642406 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.642436 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.642449 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.642467 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.642478 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.656988 4704 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13eee035-d079-4087-986f-982a570291de\\\",\\\"systemUUID\\\":\\\"2e1f8319-6b24-40fc-94be-3f7f227a5746\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:50Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:50 crc kubenswrapper[4704]: E0122 16:29:50.657121 4704 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.659708 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.659738 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.659746 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.659762 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.659773 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.762627 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.762691 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.762711 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.762737 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.762755 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.864825 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.864880 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.864918 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.864947 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.864971 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.968530 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.968607 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.968621 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.968643 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:50 crc kubenswrapper[4704]: I0122 16:29:50.968659 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:50Z","lastTransitionTime":"2026-01-22T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.070927 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.070977 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.070990 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.071007 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.071062 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.132674 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/3.log" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.133624 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/2.log" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.136663 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" exitCode=1 Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.136713 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.136784 4704 scope.go:117] "RemoveContainer" containerID="2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.137426 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" Jan 22 16:29:51 crc kubenswrapper[4704]: E0122 16:29:51.137770 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.150664 4704 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db12f584-d5e2-43f4-9513-74e9fb3b1f35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3107659da8eed6f0a85da86064deaeaf0101eea14efd6380f3aa8a2056674f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.163702 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.173298 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.173337 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.173347 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.173365 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.173377 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.179884 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.189659 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.201343 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df
14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.212174 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.221424 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.229436 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.240376 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:33Z\\\",\\\"message\\\":\\\"2026-01-22T16:28:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc\\\\n2026-01-22T16:28:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc to /host/opt/cni/bin/\\\\n2026-01-22T16:28:48Z [verbose] multus-daemon started\\\\n2026-01-22T16:28:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:29:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.249988 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479f
e187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.261518 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.272211 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.275631 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.275661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.275672 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.275688 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.275700 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.284389 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.300280 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d67ac87b4869892125e8a2878644a8eada16511e9e224c2791bb4c842289a32\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:17Z\\\",\\\"message\\\":\\\"ate)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0122 16:29:16.600625 6319 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler-operator/metrics\\\\\\\"}\\\\nI0122 16:29:16.601537 6319 services_controller.go:452] Built service openshift-oauth-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601545 6319 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-scheduler-operator for network=default : 13.799648ms\\\\nI0122 16:29:16.601558 6319 services_controller.go:453] Built service openshift-oauth-apiserver/api template LB for network=default: []services.LB{}\\\\nI0122 16:29:16.601565 6319 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0122 16:29:16.601571 6319 services_controller.go:454] Service openshift-oauth-apiserver/api for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0122 16:29:16.601582 6319 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\" \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0122 16:29:50.718231 6800 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nF0122 16:29:50.718253 6800 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.310434 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.321319 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.331868 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.343665 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:51Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.378621 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.378657 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.378668 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.378683 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.378694 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.481432 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.481469 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.481484 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.481502 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.481517 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.584647 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.584712 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.584738 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.584767 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.584843 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.633217 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.633296 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:51 crc kubenswrapper[4704]: E0122 16:29:51.633394 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:51 crc kubenswrapper[4704]: E0122 16:29:51.633521 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.640724 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:25:21.931943818 +0000 UTC Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.687575 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.687612 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.687627 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.687648 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.687663 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.790894 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.790942 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.790952 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.790968 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.790980 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.894127 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.894177 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.894187 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.894203 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.894216 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.998325 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.998379 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.998390 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.998409 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:51 crc kubenswrapper[4704]: I0122 16:29:51.998419 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:51Z","lastTransitionTime":"2026-01-22T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.101426 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.101483 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.101497 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.101518 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.101531 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.142672 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/3.log" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.146256 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" Jan 22 16:29:52 crc kubenswrapper[4704]: E0122 16:29:52.146454 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.158251 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.175169 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.189564 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d059ee4adef05c454e63271bf001a4790bc8a4b03dc0fedb030f61e0d6414c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e5f66ac9a7ace52350dd9ba331ca35da1db81ac1423c2bd5bfc51d4e1bcb12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.204557 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.204624 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.204638 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.204657 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.204687 4704 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.210345 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce29525-000a-4c91-8765-67c0c3f1ae7e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:50Z\\\",\\\"message\\\":\\\" \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0122 16:29:50.718231 6800 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}\\\\nF0122 16:29:50.718253 6800 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:29:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9aa807b3a02d4e02d9
8b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hkqnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q8h4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.223042 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-92rrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"022e2512-8e2d-483f-a733-8681aad464a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ftjn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:29:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-92rrv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.234912 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74333f63-3b57-480d-8d2d-f56e59231986\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd12682f1057098b5fc7285ca49f8cddec6155a3c4bdee08edf54a9b2e8891a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://054b4ab3fca5fe374dc8ffd3cd799a5b88a08b1d90514bc8d7fad8570567f9c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd501acc07d641d4716fe5864a10788348905c8b834a0ee47f5aba1688d9e2ec\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.249243 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e356bcc5d71c6fe69c4c2a69bc5bf82ec8ea99d62c909a75c040971f65128738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.261571 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.271699 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db12f584-d5e2-43f4-9513-74e9fb3b1f35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3107659da8eed6f0a85da86064deaeaf0101eea14efd6380f3aa8a2056674f69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2978546f1360904d8de82023ffc9bc1de9b780d7155b4e55f5bfa22b6a108236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.287912 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d30d8677-1d99-406b-af8d-fd0c5c7a643d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T16:28:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 16:28:40.099454 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 16:28:40.100869 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3308642497/tls.crt::/tmp/serving-cert-3308642497/tls.key\\\\\\\"\\\\nI0122 16:28:45.498406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 16:28:45.501207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 16:28:45.501227 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 16:28:45.501249 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 16:28:45.501256 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 16:28:45.506436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 16:28:45.506466 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 16:28:45.506478 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 16:28:45.506484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 16:28:45.506488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 16:28:45.506493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 16:28:45.506739 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 16:28:45.508875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fb3231d20039c1c50052f51c6d0c0b6
2fa7ac707b9d1b921f6cd07a4a371bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.302472 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nndw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bea4f83-78aa-49a7-a98a-60045d7f4f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a4c411ddad0c6af10cc067d5d97b8d2adcdc21335c1f9b487a29726fe254c20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd8460b541c16718b13f5ffc75651f27aa6de9a9ef5e7288e51640ad59e04928\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f84048b0300643d9e221e4c5a83ba55301108516c95b22a4b343136744268f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc29fe8a86ecd0c86074f4fe7334b4452412bed41971636ed4ec9a5ef68cc07f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bad2
4ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bad24ab32de656d1e7f95a4017dfc9f98d1b11a409c4545215c786ff6b79c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38f0df95238bb70183d62fbdb55fde25d458a241267bfa7876583b4a139f7913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://899e8d2cea134c4027ed6f803b0dac6eb61a6b131d2c70655e3ace9e234c67f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbl92\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nndw6\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.306767 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.306863 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.306881 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.306935 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.306953 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.315098 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8e25829-99af-4717-87f3-43a79b9d8c26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd08380da5027a7b1751e9e4ca06a549aa5563bdada40b43ed95cbfd4f602f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8z7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hsg8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.327603 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e1c055c-2596-4053-b9d1-fcc44c50e6e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe7208814673d3b50053cac08963840e56ff8963a28bc82a9181c5ca616fb133\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2f32c2dee5b629e65ee2e4f8010b0c1d57e
4b2bd9d1e40c4a68047dbf143a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6trt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2xkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.343248 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecfb60fe-180d-4690-b004-fa39f7988778\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b66a5c3adf942c0e5e0dbf58ebe2bcd277f50a119c6ab101db1f9fba9352c3bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2e7b5589bb2113d11fbdc257c1917880a658a02e571e0a0c49eb349d4cb3e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://494f1ac2266edc1fd90fb835076945ec923de055f1ad6e9ca4f5354e79b353e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3d95f9df14629124c73001e8ecf4cc0091fb4b4852782b09539fb387d939afa2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T16:28:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.360966 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfc5c442e26ae11eaa7c4e2dc2cf6a0688fb1879733a7900373495b8dcae4f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.374519 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ztlx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c93a4369-3f1a-4707-9e55-3968cfef2744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b576e62553e91ccf600f58e0b5ad5eef0d489b95220ab549019a4adabfd4546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqpkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ztlx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.384638 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mccb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9bb5fd98-0b3a-4412-a083-80d87ee360f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e249e47cfe643477e5d4a91c685ec2d077413110c7f31b99247a70d74fbaa6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx556\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mccb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.397373 4704 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-77bsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9357b7a7-d902-4f7e-97b9-b0a7871ec95e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T16:29:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T16:29:33Z\\\",\\\"message\\\":\\\"2026-01-22T16:28:48+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc\\\\n2026-01-22T16:28:48+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d391c0bf-2a05-43f5-a351-f96de21d87cc to /host/opt/cni/bin/\\\\n2026-01-22T16:28:48Z [verbose] multus-daemon started\\\\n2026-01-22T16:28:48Z [verbose] Readiness Indicator file check\\\\n2026-01-22T16:29:33Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T16:28:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fnz9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T16:28:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-77bsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T16:29:52Z is after 2025-08-24T17:21:41Z" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.409624 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.409701 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.409713 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 
16:29:52.409734 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.409761 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.512554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.512589 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.512598 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.512610 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.512619 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.615025 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.615059 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.615067 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.615097 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.615107 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.633601 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.633681 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:52 crc kubenswrapper[4704]: E0122 16:29:52.633749 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:52 crc kubenswrapper[4704]: E0122 16:29:52.633969 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.641486 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:11:56.661942414 +0000 UTC Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.718601 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.718666 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.718676 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.718695 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.718707 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.820840 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.820893 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.820909 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.820935 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.820952 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.923584 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.923645 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.923663 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.923688 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:52 crc kubenswrapper[4704]: I0122 16:29:52.923713 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:52Z","lastTransitionTime":"2026-01-22T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.025814 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.025863 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.025886 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.025916 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.025939 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.128497 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.128542 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.128554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.128571 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.128584 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.231934 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.231993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.232006 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.232027 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.232039 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.334861 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.334896 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.334905 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.334922 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.334932 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.437501 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.437528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.437537 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.437548 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.437557 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.539503 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.539560 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.539571 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.539589 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.539619 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.633034 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.633148 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:53 crc kubenswrapper[4704]: E0122 16:29:53.633631 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:53 crc kubenswrapper[4704]: E0122 16:29:53.633833 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.641572 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:30:46.672294446 +0000 UTC Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.641941 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.642010 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.642033 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.642064 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.642088 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.654566 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.745765 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.745892 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.745916 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.745948 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.745972 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.848604 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.848661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.848673 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.848690 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.848703 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.951554 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.951607 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.951619 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.951633 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:53 crc kubenswrapper[4704]: I0122 16:29:53.951643 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:53Z","lastTransitionTime":"2026-01-22T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.055475 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.055529 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.055546 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.055571 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.055589 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.158502 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.158568 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.158584 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.158608 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.158626 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.261050 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.261094 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.261105 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.261124 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.261136 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.363959 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.363991 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.363999 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.364012 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.364020 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.467123 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.467162 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.467172 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.467192 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.467209 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.570045 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.570092 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.570101 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.570117 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.570126 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.633189 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:54 crc kubenswrapper[4704]: E0122 16:29:54.633354 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.633414 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:54 crc kubenswrapper[4704]: E0122 16:29:54.633671 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.642168 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:39:13.157644464 +0000 UTC Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.673105 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.673167 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.673177 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.673194 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.673203 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.775434 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.775477 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.775491 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.775509 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.775524 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.877528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.877564 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.877573 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.877586 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.877595 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.980167 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.980221 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.980236 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.980292 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:54 crc kubenswrapper[4704]: I0122 16:29:54.980309 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:54Z","lastTransitionTime":"2026-01-22T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.083270 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.083348 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.083379 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.083424 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.083447 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.185953 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.186013 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.186027 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.186047 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.186059 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.289241 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.289287 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.289296 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.289314 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.289325 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.392109 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.392156 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.392166 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.392182 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.392195 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.495370 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.495439 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.495460 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.495494 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.495512 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.598657 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.598985 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.599093 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.599163 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.599231 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.633202 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.633256 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:55 crc kubenswrapper[4704]: E0122 16:29:55.633763 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:55 crc kubenswrapper[4704]: E0122 16:29:55.634144 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.643128 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:27:47.361350861 +0000 UTC Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.701661 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.701724 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.701736 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.701760 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.701775 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.804952 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.805323 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.805568 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.805889 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.806141 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.908558 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.908841 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.908949 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.909019 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:55 crc kubenswrapper[4704]: I0122 16:29:55.909085 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:55Z","lastTransitionTime":"2026-01-22T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.012032 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.012393 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.012507 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.012593 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.012690 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.115139 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.115175 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.115186 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.115201 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.115211 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.218592 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.218654 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.218670 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.218694 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.218711 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.322268 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.322579 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.322756 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.322995 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.323154 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.431047 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.431132 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.431158 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.431189 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.431205 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.534190 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.534251 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.534269 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.534294 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.534312 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.632905 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.633152 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:56 crc kubenswrapper[4704]: E0122 16:29:56.633531 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:56 crc kubenswrapper[4704]: E0122 16:29:56.633702 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.637041 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.637105 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.637126 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.637156 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.637179 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.643863 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:13:39.572052605 +0000 UTC Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.740054 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.740112 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.740128 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.740153 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.740170 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.842741 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.842837 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.842857 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.842884 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.842904 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.946114 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.946193 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.946215 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.946246 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:56 crc kubenswrapper[4704]: I0122 16:29:56.946274 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:56Z","lastTransitionTime":"2026-01-22T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.049396 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.049504 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.049528 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.049557 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.049579 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.152040 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.152073 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.152081 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.152095 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.152103 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.255684 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.255767 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.255781 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.255830 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.255847 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.358978 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.359026 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.359038 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.359062 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.359078 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.462435 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.462514 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.462537 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.462567 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.462589 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.565535 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.565574 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.565585 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.565600 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.565614 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.633383 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.633438 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:57 crc kubenswrapper[4704]: E0122 16:29:57.633529 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:57 crc kubenswrapper[4704]: E0122 16:29:57.633607 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.644047 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:32:30.007345529 +0000 UTC Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.669652 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.669808 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.669830 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.669848 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.669860 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.685883 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.685862652 podStartE2EDuration="1m11.685862652s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.66747324 +0000 UTC m=+90.312019940" watchObservedRunningTime="2026-01-22 16:29:57.685862652 +0000 UTC m=+90.330409362" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.734028 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.734013449 podStartE2EDuration="4.734013449s" podCreationTimestamp="2026-01-22 16:29:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.733458304 +0000 UTC m=+90.378005014" watchObservedRunningTime="2026-01-22 16:29:57.734013449 +0000 UTC m=+90.378560149" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.756124 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.756105059 podStartE2EDuration="22.756105059s" podCreationTimestamp="2026-01-22 16:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.74417103 +0000 UTC m=+90.388717730" watchObservedRunningTime="2026-01-22 16:29:57.756105059 +0000 UTC m=+90.400651769" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.756562 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.756556842 
podStartE2EDuration="1m12.756556842s" podCreationTimestamp="2026-01-22 16:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.755918034 +0000 UTC m=+90.400464744" watchObservedRunningTime="2026-01-22 16:29:57.756556842 +0000 UTC m=+90.401103552" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.772303 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.772349 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.772363 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.772381 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.772395 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.777784 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nndw6" podStartSLOduration=71.777772519 podStartE2EDuration="1m11.777772519s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.777574973 +0000 UTC m=+90.422121673" watchObservedRunningTime="2026-01-22 16:29:57.777772519 +0000 UTC m=+90.422319229" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.794054 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podStartSLOduration=71.794036753 podStartE2EDuration="1m11.794036753s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.794012913 +0000 UTC m=+90.438559613" watchObservedRunningTime="2026-01-22 16:29:57.794036753 +0000 UTC m=+90.438583453" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.808855 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2xkc" podStartSLOduration=71.808836359 podStartE2EDuration="1m11.808836359s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.806275141 +0000 UTC m=+90.450821841" watchObservedRunningTime="2026-01-22 16:29:57.808836359 +0000 UTC m=+90.453383059" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.841140 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.841122572 podStartE2EDuration="35.841122572s" podCreationTimestamp="2026-01-22 16:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.828545206 +0000 UTC m=+90.473091916" watchObservedRunningTime="2026-01-22 16:29:57.841122572 +0000 UTC m=+90.485669272" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.851422 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ztlx4" podStartSLOduration=71.851409137 podStartE2EDuration="1m11.851409137s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.850684358 +0000 UTC m=+90.495231058" watchObservedRunningTime="2026-01-22 16:29:57.851409137 +0000 UTC m=+90.495955827" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.860860 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mccb2" podStartSLOduration=71.860844099 podStartE2EDuration="1m11.860844099s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.860476679 +0000 UTC m=+90.505023379" watchObservedRunningTime="2026-01-22 16:29:57.860844099 +0000 UTC m=+90.505390799" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.875003 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.875300 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.875420 4704 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.875538 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.875692 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.881436 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-77bsn" podStartSLOduration=71.881425659 podStartE2EDuration="1m11.881425659s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:29:57.881128352 +0000 UTC m=+90.525675062" watchObservedRunningTime="2026-01-22 16:29:57.881425659 +0000 UTC m=+90.525972349" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.978430 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.978483 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.978492 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.978507 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 22 16:29:57 crc kubenswrapper[4704]: I0122 16:29:57.978516 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:57Z","lastTransitionTime":"2026-01-22T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.080770 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.080821 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.080834 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.080848 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.080859 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.183198 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.183240 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.183249 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.183266 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.183279 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.286195 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.286250 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.286260 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.286276 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.286286 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.388954 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.389473 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.389553 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.389660 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.389738 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.493058 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.493130 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.493149 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.493175 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.493196 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.596129 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.596182 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.596194 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.596211 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.596223 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.632743 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:29:58 crc kubenswrapper[4704]: E0122 16:29:58.632890 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.632745 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:29:58 crc kubenswrapper[4704]: E0122 16:29:58.632972 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.646295 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:51:00.125159327 +0000 UTC Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.699776 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.699840 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.699852 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.699870 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.699882 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.802130 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.802173 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.802181 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.802196 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.802205 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.905182 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.905238 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.905249 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.905266 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:58 crc kubenswrapper[4704]: I0122 16:29:58.905284 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:58Z","lastTransitionTime":"2026-01-22T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.007877 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.007930 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.007942 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.007957 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.007969 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.111484 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.111535 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.111547 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.111569 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.111582 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.218741 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.218826 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.218849 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.218873 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.218891 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.323447 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.323542 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.323558 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.323579 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.323593 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.426272 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.426352 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.426367 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.426406 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.426423 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.529752 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.529813 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.529822 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.529837 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.529848 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.632177 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.632213 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.632225 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.632240 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.632251 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.632663 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.632722 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:29:59 crc kubenswrapper[4704]: E0122 16:29:59.632828 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:29:59 crc kubenswrapper[4704]: E0122 16:29:59.633025 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.646567 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:24:41.199209223 +0000 UTC Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.734716 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.734755 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.734765 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.734802 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.734820 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.837152 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.837188 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.837199 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.837214 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.837228 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.940688 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.940766 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.940786 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.940841 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:29:59 crc kubenswrapper[4704]: I0122 16:29:59.940895 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:29:59Z","lastTransitionTime":"2026-01-22T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.044413 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.044478 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.044498 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.044523 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.044542 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.149452 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.149522 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.149540 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.149566 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.149585 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.252990 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.253047 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.253067 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.253093 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.253110 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.355886 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.355928 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.355940 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.355956 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.355967 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.458874 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.458925 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.458936 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.459062 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.459077 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.561981 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.562031 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.562058 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.562082 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.562099 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.633369 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:00 crc kubenswrapper[4704]: E0122 16:30:00.633565 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.633658 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:00 crc kubenswrapper[4704]: E0122 16:30:00.633741 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.646815 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:47:02.051696171 +0000 UTC Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.665892 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.665978 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.665993 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.666015 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.666029 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.768760 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.768832 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.768850 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.768873 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.768885 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.871848 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.871898 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.871911 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.871927 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.871939 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.971852 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.971893 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.971904 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.971920 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 16:30:00 crc kubenswrapper[4704]: I0122 16:30:00.971931 4704 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T16:30:00Z","lastTransitionTime":"2026-01-22T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.012976 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln"] Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.013382 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.015139 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.015249 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.017904 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.018197 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.149115 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5fad649-4da6-4147-bb3b-5e84a521a97f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.149153 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5fad649-4da6-4147-bb3b-5e84a521a97f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.149190 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f5fad649-4da6-4147-bb3b-5e84a521a97f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.149220 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5fad649-4da6-4147-bb3b-5e84a521a97f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.149240 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f5fad649-4da6-4147-bb3b-5e84a521a97f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250044 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5fad649-4da6-4147-bb3b-5e84a521a97f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250098 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5fad649-4da6-4147-bb3b-5e84a521a97f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250117 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f5fad649-4da6-4147-bb3b-5e84a521a97f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250131 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5fad649-4da6-4147-bb3b-5e84a521a97f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250201 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f5fad649-4da6-4147-bb3b-5e84a521a97f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250242 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5fad649-4da6-4147-bb3b-5e84a521a97f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250326 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/f5fad649-4da6-4147-bb3b-5e84a521a97f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.250823 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f5fad649-4da6-4147-bb3b-5e84a521a97f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.256428 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5fad649-4da6-4147-bb3b-5e84a521a97f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.265236 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5fad649-4da6-4147-bb3b-5e84a521a97f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q4vln\" (UID: \"f5fad649-4da6-4147-bb3b-5e84a521a97f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.327536 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" Jan 22 16:30:01 crc kubenswrapper[4704]: W0122 16:30:01.342324 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5fad649_4da6_4147_bb3b_5e84a521a97f.slice/crio-ed7060c11267663e9e864da2ba8251697626f6063dd29bc0347c2857835ac0f3 WatchSource:0}: Error finding container ed7060c11267663e9e864da2ba8251697626f6063dd29bc0347c2857835ac0f3: Status 404 returned error can't find the container with id ed7060c11267663e9e864da2ba8251697626f6063dd29bc0347c2857835ac0f3 Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.633314 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:01 crc kubenswrapper[4704]: E0122 16:30:01.633424 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.633583 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:01 crc kubenswrapper[4704]: E0122 16:30:01.633627 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.647360 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:24:17.077781659 +0000 UTC Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.647428 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 16:30:01 crc kubenswrapper[4704]: I0122 16:30:01.655547 4704 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 16:30:02 crc kubenswrapper[4704]: I0122 16:30:02.173126 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" event={"ID":"f5fad649-4da6-4147-bb3b-5e84a521a97f","Type":"ContainerStarted","Data":"6b297d48f8b840acd2ff7310920e5b6801af7ff1fda42edf5d62df5566b7df6d"} Jan 22 16:30:02 crc kubenswrapper[4704]: I0122 16:30:02.173174 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" event={"ID":"f5fad649-4da6-4147-bb3b-5e84a521a97f","Type":"ContainerStarted","Data":"ed7060c11267663e9e864da2ba8251697626f6063dd29bc0347c2857835ac0f3"} Jan 22 16:30:02 crc kubenswrapper[4704]: I0122 16:30:02.190547 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q4vln" podStartSLOduration=76.19053091 podStartE2EDuration="1m16.19053091s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:02.189262826 +0000 UTC m=+94.833809526" watchObservedRunningTime="2026-01-22 16:30:02.19053091 +0000 UTC m=+94.835077630" Jan 22 
16:30:02 crc kubenswrapper[4704]: I0122 16:30:02.632736 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:02 crc kubenswrapper[4704]: I0122 16:30:02.632806 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:02 crc kubenswrapper[4704]: E0122 16:30:02.632882 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:02 crc kubenswrapper[4704]: E0122 16:30:02.632951 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:03 crc kubenswrapper[4704]: I0122 16:30:03.632976 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:03 crc kubenswrapper[4704]: I0122 16:30:03.632996 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:03 crc kubenswrapper[4704]: E0122 16:30:03.633098 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:03 crc kubenswrapper[4704]: E0122 16:30:03.633188 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:04 crc kubenswrapper[4704]: I0122 16:30:04.632867 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:04 crc kubenswrapper[4704]: I0122 16:30:04.632898 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:04 crc kubenswrapper[4704]: E0122 16:30:04.633150 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:04 crc kubenswrapper[4704]: E0122 16:30:04.633381 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:05 crc kubenswrapper[4704]: I0122 16:30:05.096895 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:05 crc kubenswrapper[4704]: E0122 16:30:05.096999 4704 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:05 crc kubenswrapper[4704]: E0122 16:30:05.097064 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs podName:022e2512-8e2d-483f-a733-8681aad464a3 nodeName:}" failed. No retries permitted until 2026-01-22 16:31:09.097048746 +0000 UTC m=+161.741595446 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs") pod "network-metrics-daemon-92rrv" (UID: "022e2512-8e2d-483f-a733-8681aad464a3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 16:30:05 crc kubenswrapper[4704]: I0122 16:30:05.633732 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:05 crc kubenswrapper[4704]: I0122 16:30:05.633887 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:05 crc kubenswrapper[4704]: E0122 16:30:05.634007 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:05 crc kubenswrapper[4704]: E0122 16:30:05.634137 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:05 crc kubenswrapper[4704]: I0122 16:30:05.634705 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" Jan 22 16:30:05 crc kubenswrapper[4704]: E0122 16:30:05.634943 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:30:06 crc kubenswrapper[4704]: I0122 16:30:06.633121 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:06 crc kubenswrapper[4704]: I0122 16:30:06.633166 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:06 crc kubenswrapper[4704]: E0122 16:30:06.633284 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:06 crc kubenswrapper[4704]: E0122 16:30:06.633360 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:07 crc kubenswrapper[4704]: I0122 16:30:07.633418 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:07 crc kubenswrapper[4704]: I0122 16:30:07.633526 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:07 crc kubenswrapper[4704]: E0122 16:30:07.636012 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:07 crc kubenswrapper[4704]: E0122 16:30:07.636164 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:08 crc kubenswrapper[4704]: I0122 16:30:08.632886 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:08 crc kubenswrapper[4704]: I0122 16:30:08.632906 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:08 crc kubenswrapper[4704]: E0122 16:30:08.633433 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:08 crc kubenswrapper[4704]: E0122 16:30:08.633310 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:09 crc kubenswrapper[4704]: I0122 16:30:09.633111 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:09 crc kubenswrapper[4704]: I0122 16:30:09.634008 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:09 crc kubenswrapper[4704]: E0122 16:30:09.634109 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:09 crc kubenswrapper[4704]: E0122 16:30:09.634200 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:10 crc kubenswrapper[4704]: I0122 16:30:10.633759 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:10 crc kubenswrapper[4704]: I0122 16:30:10.633834 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:10 crc kubenswrapper[4704]: E0122 16:30:10.634080 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:10 crc kubenswrapper[4704]: E0122 16:30:10.634157 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:11 crc kubenswrapper[4704]: I0122 16:30:11.633431 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:11 crc kubenswrapper[4704]: I0122 16:30:11.633430 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:11 crc kubenswrapper[4704]: E0122 16:30:11.633621 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:11 crc kubenswrapper[4704]: E0122 16:30:11.633894 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:12 crc kubenswrapper[4704]: I0122 16:30:12.633292 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:12 crc kubenswrapper[4704]: E0122 16:30:12.633694 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:12 crc kubenswrapper[4704]: I0122 16:30:12.633465 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:12 crc kubenswrapper[4704]: E0122 16:30:12.633779 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:13 crc kubenswrapper[4704]: I0122 16:30:13.633074 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:13 crc kubenswrapper[4704]: I0122 16:30:13.633191 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:13 crc kubenswrapper[4704]: E0122 16:30:13.633269 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:13 crc kubenswrapper[4704]: E0122 16:30:13.633411 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:14 crc kubenswrapper[4704]: I0122 16:30:14.633580 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:14 crc kubenswrapper[4704]: I0122 16:30:14.633622 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:14 crc kubenswrapper[4704]: E0122 16:30:14.633785 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:14 crc kubenswrapper[4704]: E0122 16:30:14.633901 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:15 crc kubenswrapper[4704]: I0122 16:30:15.633496 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:15 crc kubenswrapper[4704]: I0122 16:30:15.633655 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:15 crc kubenswrapper[4704]: E0122 16:30:15.633837 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:15 crc kubenswrapper[4704]: E0122 16:30:15.634033 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:16 crc kubenswrapper[4704]: I0122 16:30:16.632727 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:16 crc kubenswrapper[4704]: E0122 16:30:16.632874 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:16 crc kubenswrapper[4704]: I0122 16:30:16.632959 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:16 crc kubenswrapper[4704]: E0122 16:30:16.633144 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:17 crc kubenswrapper[4704]: I0122 16:30:17.633057 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:17 crc kubenswrapper[4704]: I0122 16:30:17.633088 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:17 crc kubenswrapper[4704]: E0122 16:30:17.636503 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:17 crc kubenswrapper[4704]: E0122 16:30:17.636871 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:17 crc kubenswrapper[4704]: I0122 16:30:17.636971 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" Jan 22 16:30:17 crc kubenswrapper[4704]: E0122 16:30:17.637288 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:30:18 crc kubenswrapper[4704]: I0122 16:30:18.633315 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:18 crc kubenswrapper[4704]: I0122 16:30:18.633351 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:18 crc kubenswrapper[4704]: E0122 16:30:18.633435 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:18 crc kubenswrapper[4704]: E0122 16:30:18.633529 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:19 crc kubenswrapper[4704]: I0122 16:30:19.632882 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:19 crc kubenswrapper[4704]: I0122 16:30:19.632916 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:19 crc kubenswrapper[4704]: E0122 16:30:19.633440 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:19 crc kubenswrapper[4704]: E0122 16:30:19.633648 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.233648 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/1.log" Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.234532 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/0.log" Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.234616 4704 generic.go:334] "Generic (PLEG): container finished" podID="9357b7a7-d902-4f7e-97b9-b0a7871ec95e" containerID="6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35" exitCode=1 Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.234663 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerDied","Data":"6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35"} Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.234749 4704 scope.go:117] "RemoveContainer" containerID="4c2f8e6d222ab7e3db0d099c2f04137b15c84745dd71d1099b8986353df697a7" Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.235641 4704 scope.go:117] "RemoveContainer" containerID="6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35" Jan 22 16:30:20 crc kubenswrapper[4704]: E0122 16:30:20.236346 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-77bsn_openshift-multus(9357b7a7-d902-4f7e-97b9-b0a7871ec95e)\"" pod="openshift-multus/multus-77bsn" podUID="9357b7a7-d902-4f7e-97b9-b0a7871ec95e" Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.632961 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:20 crc kubenswrapper[4704]: I0122 16:30:20.632958 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:20 crc kubenswrapper[4704]: E0122 16:30:20.633140 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:20 crc kubenswrapper[4704]: E0122 16:30:20.633309 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:21 crc kubenswrapper[4704]: I0122 16:30:21.239471 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/1.log" Jan 22 16:30:21 crc kubenswrapper[4704]: I0122 16:30:21.633310 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:21 crc kubenswrapper[4704]: I0122 16:30:21.633399 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:21 crc kubenswrapper[4704]: E0122 16:30:21.633444 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:21 crc kubenswrapper[4704]: E0122 16:30:21.633599 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:22 crc kubenswrapper[4704]: I0122 16:30:22.633370 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:22 crc kubenswrapper[4704]: I0122 16:30:22.633377 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:22 crc kubenswrapper[4704]: E0122 16:30:22.633647 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:22 crc kubenswrapper[4704]: E0122 16:30:22.633872 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:23 crc kubenswrapper[4704]: I0122 16:30:23.633562 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:23 crc kubenswrapper[4704]: I0122 16:30:23.633591 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:23 crc kubenswrapper[4704]: E0122 16:30:23.633787 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:23 crc kubenswrapper[4704]: E0122 16:30:23.633946 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:24 crc kubenswrapper[4704]: I0122 16:30:24.633182 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:24 crc kubenswrapper[4704]: I0122 16:30:24.633217 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:24 crc kubenswrapper[4704]: E0122 16:30:24.633407 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:24 crc kubenswrapper[4704]: E0122 16:30:24.633563 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:25 crc kubenswrapper[4704]: I0122 16:30:25.633697 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:25 crc kubenswrapper[4704]: I0122 16:30:25.633717 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:25 crc kubenswrapper[4704]: E0122 16:30:25.633916 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:25 crc kubenswrapper[4704]: E0122 16:30:25.633954 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:26 crc kubenswrapper[4704]: I0122 16:30:26.633142 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:26 crc kubenswrapper[4704]: I0122 16:30:26.633358 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:26 crc kubenswrapper[4704]: E0122 16:30:26.633519 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:26 crc kubenswrapper[4704]: E0122 16:30:26.634108 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:27 crc kubenswrapper[4704]: E0122 16:30:27.582179 4704 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 16:30:27 crc kubenswrapper[4704]: I0122 16:30:27.632986 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:27 crc kubenswrapper[4704]: E0122 16:30:27.633099 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:27 crc kubenswrapper[4704]: I0122 16:30:27.634339 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:27 crc kubenswrapper[4704]: E0122 16:30:27.635253 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:27 crc kubenswrapper[4704]: E0122 16:30:27.728817 4704 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:30:28 crc kubenswrapper[4704]: I0122 16:30:28.637038 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:28 crc kubenswrapper[4704]: I0122 16:30:28.637924 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" Jan 22 16:30:28 crc kubenswrapper[4704]: I0122 16:30:28.637614 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:28 crc kubenswrapper[4704]: E0122 16:30:28.638083 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-q8h4x_openshift-ovn-kubernetes(fce29525-000a-4c91-8765-67c0c3f1ae7e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" Jan 22 16:30:28 crc kubenswrapper[4704]: E0122 16:30:28.638083 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:28 crc kubenswrapper[4704]: E0122 16:30:28.640092 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:29 crc kubenswrapper[4704]: I0122 16:30:29.633875 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:29 crc kubenswrapper[4704]: I0122 16:30:29.634950 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:29 crc kubenswrapper[4704]: E0122 16:30:29.635076 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:29 crc kubenswrapper[4704]: E0122 16:30:29.635299 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:30 crc kubenswrapper[4704]: I0122 16:30:30.632753 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:30 crc kubenswrapper[4704]: I0122 16:30:30.632786 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:30 crc kubenswrapper[4704]: E0122 16:30:30.632992 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:30 crc kubenswrapper[4704]: E0122 16:30:30.633107 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:31 crc kubenswrapper[4704]: I0122 16:30:31.633542 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:31 crc kubenswrapper[4704]: I0122 16:30:31.633620 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:31 crc kubenswrapper[4704]: E0122 16:30:31.633786 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:31 crc kubenswrapper[4704]: E0122 16:30:31.633953 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:32 crc kubenswrapper[4704]: I0122 16:30:32.633474 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:32 crc kubenswrapper[4704]: E0122 16:30:32.633654 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:32 crc kubenswrapper[4704]: I0122 16:30:32.633738 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:32 crc kubenswrapper[4704]: E0122 16:30:32.634076 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:32 crc kubenswrapper[4704]: I0122 16:30:32.634254 4704 scope.go:117] "RemoveContainer" containerID="6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35" Jan 22 16:30:32 crc kubenswrapper[4704]: E0122 16:30:32.730081 4704 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:30:33 crc kubenswrapper[4704]: I0122 16:30:33.281516 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/1.log" Jan 22 16:30:33 crc kubenswrapper[4704]: I0122 16:30:33.281581 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerStarted","Data":"6c4b1bdc0188a97a87e635a079219bea7a676bb95436b887abb9fc74e596b72d"} Jan 22 16:30:33 crc kubenswrapper[4704]: I0122 16:30:33.633631 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:33 crc kubenswrapper[4704]: I0122 16:30:33.633692 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:33 crc kubenswrapper[4704]: E0122 16:30:33.633845 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:33 crc kubenswrapper[4704]: E0122 16:30:33.633973 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:34 crc kubenswrapper[4704]: I0122 16:30:34.632856 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:34 crc kubenswrapper[4704]: I0122 16:30:34.632951 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:34 crc kubenswrapper[4704]: E0122 16:30:34.633196 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:34 crc kubenswrapper[4704]: E0122 16:30:34.633662 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:35 crc kubenswrapper[4704]: I0122 16:30:35.633210 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:35 crc kubenswrapper[4704]: I0122 16:30:35.633268 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:35 crc kubenswrapper[4704]: E0122 16:30:35.633450 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:35 crc kubenswrapper[4704]: E0122 16:30:35.633506 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:36 crc kubenswrapper[4704]: I0122 16:30:36.632756 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:36 crc kubenswrapper[4704]: I0122 16:30:36.632886 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:36 crc kubenswrapper[4704]: E0122 16:30:36.632945 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:36 crc kubenswrapper[4704]: E0122 16:30:36.633124 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:37 crc kubenswrapper[4704]: I0122 16:30:37.632964 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:37 crc kubenswrapper[4704]: I0122 16:30:37.633014 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:37 crc kubenswrapper[4704]: E0122 16:30:37.633870 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:37 crc kubenswrapper[4704]: E0122 16:30:37.634082 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:37 crc kubenswrapper[4704]: E0122 16:30:37.731437 4704 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:30:38 crc kubenswrapper[4704]: I0122 16:30:38.633676 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:38 crc kubenswrapper[4704]: I0122 16:30:38.633725 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:38 crc kubenswrapper[4704]: E0122 16:30:38.633899 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:38 crc kubenswrapper[4704]: E0122 16:30:38.634057 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:39 crc kubenswrapper[4704]: I0122 16:30:39.632782 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:39 crc kubenswrapper[4704]: I0122 16:30:39.632924 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:39 crc kubenswrapper[4704]: E0122 16:30:39.632995 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:39 crc kubenswrapper[4704]: E0122 16:30:39.633114 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:40 crc kubenswrapper[4704]: I0122 16:30:40.632675 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:40 crc kubenswrapper[4704]: I0122 16:30:40.632724 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:40 crc kubenswrapper[4704]: E0122 16:30:40.634296 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:40 crc kubenswrapper[4704]: E0122 16:30:40.634910 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:41 crc kubenswrapper[4704]: I0122 16:30:41.632968 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:41 crc kubenswrapper[4704]: E0122 16:30:41.634107 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:41 crc kubenswrapper[4704]: I0122 16:30:41.633079 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:41 crc kubenswrapper[4704]: E0122 16:30:41.634396 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:42 crc kubenswrapper[4704]: I0122 16:30:42.633302 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:42 crc kubenswrapper[4704]: I0122 16:30:42.633356 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:42 crc kubenswrapper[4704]: E0122 16:30:42.633438 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:42 crc kubenswrapper[4704]: E0122 16:30:42.633579 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:42 crc kubenswrapper[4704]: E0122 16:30:42.734102 4704 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:30:43 crc kubenswrapper[4704]: I0122 16:30:43.633404 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:43 crc kubenswrapper[4704]: E0122 16:30:43.633633 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:43 crc kubenswrapper[4704]: I0122 16:30:43.633690 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:43 crc kubenswrapper[4704]: E0122 16:30:43.634401 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:43 crc kubenswrapper[4704]: I0122 16:30:43.634822 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" Jan 22 16:30:44 crc kubenswrapper[4704]: I0122 16:30:44.633226 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:44 crc kubenswrapper[4704]: E0122 16:30:44.633677 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:44 crc kubenswrapper[4704]: I0122 16:30:44.633408 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:44 crc kubenswrapper[4704]: E0122 16:30:44.634136 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.361726 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/3.log" Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.367711 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerStarted","Data":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"} Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.368248 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.543023 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podStartSLOduration=119.542999804 podStartE2EDuration="1m59.542999804s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:45.399939057 +0000 UTC m=+138.044485757" watchObservedRunningTime="2026-01-22 16:30:45.542999804 +0000 UTC m=+138.187546504" Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.543629 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/network-metrics-daemon-92rrv"] Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.543728 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:45 crc kubenswrapper[4704]: E0122 16:30:45.543885 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.633195 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:45 crc kubenswrapper[4704]: E0122 16:30:45.633361 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:45 crc kubenswrapper[4704]: I0122 16:30:45.633589 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:45 crc kubenswrapper[4704]: E0122 16:30:45.633652 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:46 crc kubenswrapper[4704]: I0122 16:30:46.633475 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:46 crc kubenswrapper[4704]: E0122 16:30:46.633592 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 16:30:47 crc kubenswrapper[4704]: I0122 16:30:47.632737 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:47 crc kubenswrapper[4704]: I0122 16:30:47.632766 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:47 crc kubenswrapper[4704]: E0122 16:30:47.633710 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 16:30:47 crc kubenswrapper[4704]: I0122 16:30:47.633755 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:47 crc kubenswrapper[4704]: E0122 16:30:47.633932 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 16:30:47 crc kubenswrapper[4704]: E0122 16:30:47.634089 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-92rrv" podUID="022e2512-8e2d-483f-a733-8681aad464a3" Jan 22 16:30:48 crc kubenswrapper[4704]: I0122 16:30:48.633087 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:48 crc kubenswrapper[4704]: I0122 16:30:48.634937 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 16:30:48 crc kubenswrapper[4704]: I0122 16:30:48.635609 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.086822 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.087216 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.633682 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.633711 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.633756 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.635490 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.636779 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.637078 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 16:30:49 crc kubenswrapper[4704]: I0122 16:30:49.637082 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.423669 4704 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.475391 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.475899 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.476902 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cpq2f"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.477832 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.478456 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.479151 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.479783 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.480249 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.481096 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hgdwt"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.481777 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.485898 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lvsjg"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.486319 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.488023 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.488272 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.489399 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.489967 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490046 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490216 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490332 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490511 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490558 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490684 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490729 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490897 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490959 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.490966 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.491393 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.498487 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498507 4704 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.498565 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace 
\"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498502 4704 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.498607 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.498623 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.498605 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498633 4704 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498677 4704 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets 
"openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498705 4704 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498716 4704 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.498728 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498741 4704 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.498762 4704 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.498765 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.498778 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.498715 4704 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.498719 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 
crc kubenswrapper[4704]: E0122 16:30:52.498849 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.498709 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.498631 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.498599 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.498954 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.499119 4704 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between 
node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.499145 4704 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.499155 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.499177 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.499742 4704 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.499767 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch 
*v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: W0122 16:30:52.499836 4704 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 22 16:30:52 crc kubenswrapper[4704]: E0122 16:30:52.499855 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.499912 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.500038 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l6zs2"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.500672 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.501566 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.502057 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.504070 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.505860 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8v4fz"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.506790 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.507751 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.508476 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.509094 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-khgwd"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.509751 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.510009 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.510043 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.510186 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kzftk"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.511108 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.511939 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-92qrn"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.512611 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2np4w"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.512771 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.513108 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2np4w" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.513416 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xvsbg"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.513983 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.514081 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.514333 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.532086 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.538576 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2pkc8"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.559735 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.560664 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.560870 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.561720 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.563032 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.564856 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.564923 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.565046 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.565392 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2qcrw"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.566019 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.566265 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.566323 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.566289 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.566811 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.567094 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.567269 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.569228 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.569427 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.569671 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.569854 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.569997 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.570146 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.570287 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.572724 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.573193 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.573685 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.573863 4704 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.573956 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.574061 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.574149 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.574420 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.574515 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.574657 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.574754 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.574880 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.575167 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.576135 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.576184 4704 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.576307 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.576531 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.576716 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.576861 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577012 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577128 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577390 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577485 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577507 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577610 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577709 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577842 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.577974 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.578074 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.578148 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.578077 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.578681 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.579253 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.581447 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.581721 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.581863 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.581995 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.582112 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.582226 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.582258 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.582343 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.582417 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.582445 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 
16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.582491 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.583518 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.583638 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gllz9"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.584388 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.584389 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.585048 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.586231 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.586805 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.586887 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.587453 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.589850 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cpq2f"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.590900 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.591053 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.591361 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.593480 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.598317 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lvsjg"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.605785 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/890108ab-72eb-4eed-8d33-5abf5494b6d5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.606088 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkd66\" (UniqueName: 
\"kubernetes.io/projected/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-kube-api-access-fkd66\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.620985 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.621584 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4pc\" (UniqueName: \"kubernetes.io/projected/822794ef-a29d-43bb-8e01-ab9aa44ed0be-kube-api-access-7x4pc\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.624529 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.627634 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.631974 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632255 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kn9m\" (UniqueName: \"kubernetes.io/projected/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-kube-api-access-5kn9m\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632385 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632422 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-etcd-serving-ca\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632449 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ps25\" (UniqueName: \"kubernetes.io/projected/890108ab-72eb-4eed-8d33-5abf5494b6d5-kube-api-access-2ps25\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632499 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632526 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert\") pod 
\"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632555 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w28j\" (UniqueName: \"kubernetes.io/projected/f3395d1f-e400-4f01-87c2-7321f583d6d3-kube-api-access-9w28j\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632590 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97a55eb5-6536-4b57-ba38-39e6739d8188-audit-dir\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632618 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/890108ab-72eb-4eed-8d33-5abf5494b6d5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632655 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zgdw\" (UniqueName: \"kubernetes.io/projected/aef72b7b-ce60-41c1-903a-16ebddec4d6f-kube-api-access-4zgdw\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632701 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-audit-dir\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632735 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-images\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632756 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-serving-cert\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632846 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632873 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-oauth-serving-cert\") pod \"console-f9d7485db-khgwd\" (UID: 
\"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632894 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632940 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.632960 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-config\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633015 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633043 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8q7m\" (UniqueName: \"kubernetes.io/projected/08014b73-1836-45da-a3fa-8a05ad57ebad-kube-api-access-p8q7m\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633088 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633113 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-config\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633137 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-service-ca-bundle\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633162 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-service-ca\") pod \"console-f9d7485db-khgwd\" (UID: 
\"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633182 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/890108ab-72eb-4eed-8d33-5abf5494b6d5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633206 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/822794ef-a29d-43bb-8e01-ab9aa44ed0be-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633247 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633279 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-audit\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633319 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633345 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-encryption-config\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633424 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633452 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-etcd-client\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633471 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-config\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633504 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-encryption-config\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633524 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633549 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633570 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd037191-da3d-4f66-9d51-bd18a3ba0082-serving-cert\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633588 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mnl24\" (UniqueName: \"kubernetes.io/projected/cd037191-da3d-4f66-9d51-bd18a3ba0082-kube-api-access-mnl24\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633606 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-config\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633658 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633684 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-serving-cert\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633729 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-audit-policies\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633753 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhfm\" (UniqueName: \"kubernetes.io/projected/97a55eb5-6536-4b57-ba38-39e6739d8188-kube-api-access-trhfm\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633811 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-dir\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633836 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-serving-cert\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633869 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633894 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-image-import-ca\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633914 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/97a55eb5-6536-4b57-ba38-39e6739d8188-node-pullsecrets\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633955 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.633979 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-etcd-client\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634008 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634033 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-oauth-config\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634053 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-trusted-ca-bundle\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634079 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634098 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634118 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vb87\" (UniqueName: \"kubernetes.io/projected/24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8-kube-api-access-7vb87\") pod \"downloads-7954f5f757-2np4w\" (UID: \"24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8\") " pod="openshift-console/downloads-7954f5f757-2np4w"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634172 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-policies\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634226 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-config\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634251 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xcv\" (UniqueName: \"kubernetes.io/projected/5ba602c9-6155-46ca-baa1-0cfcd35cab16-kube-api-access-58xcv\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634272 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3395d1f-e400-4f01-87c2-7321f583d6d3-config\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634296 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3395d1f-e400-4f01-87c2-7321f583d6d3-serving-cert\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634330 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-serving-cert\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.634353 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3395d1f-e400-4f01-87c2-7321f583d6d3-trusted-ca\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.636944 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.637710 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.637735 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hgdwt"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.637756 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.638210 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.638655 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.640712 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.640823 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.644000 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.647332 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.648025 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.648596 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.649043 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.649320 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cswhh"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.649662 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cswhh"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.650158 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.650426 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.650688 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.653177 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.653235 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.653187 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.654481 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.656410 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.659832 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lx7sw"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.660387 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.660736 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.661227 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.661826 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.662017 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.664893 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kxvpl"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.665918 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.665949 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.666040 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kxvpl"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.672240 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.672535 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.675830 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.675874 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l6zs2"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.676658 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2np4w"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.677559 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-92qrn"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.679849 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.680422 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.682854 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.683886 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.685353 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kzftk"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.685627 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8v4fz"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.686711 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.689470 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.695829 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.697879 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.702113 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.703951 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-whsbg"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.706724 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.708725 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fxzwk"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.709527 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.709631 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fxzwk"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.711736 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2qcrw"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.713763 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.728517 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.729391 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.729831 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.730816 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.731636 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lx7sw"]
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735267 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735303 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-etcd-client\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735328 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-config\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735346 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-encryption-config\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735363 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735381 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735399 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd037191-da3d-4f66-9d51-bd18a3ba0082-serving-cert\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735414 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnl24\" (UniqueName: \"kubernetes.io/projected/cd037191-da3d-4f66-9d51-bd18a3ba0082-kube-api-access-mnl24\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735431 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-config\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735457 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735483 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-serving-cert\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735508 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-audit-policies\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735530 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trhfm\" (UniqueName: \"kubernetes.io/projected/97a55eb5-6536-4b57-ba38-39e6739d8188-kube-api-access-trhfm\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735551 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-dir\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735572 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-serving-cert\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735592 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735606 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-image-import-ca\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735619 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/97a55eb5-6536-4b57-ba38-39e6739d8188-node-pullsecrets\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735636 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735651 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-etcd-client\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735665 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735681 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-oauth-config\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735694 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-trusted-ca-bundle\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735710 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735727 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735743 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vb87\" (UniqueName: \"kubernetes.io/projected/24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8-kube-api-access-7vb87\") pod \"downloads-7954f5f757-2np4w\" (UID: \"24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8\") " pod="openshift-console/downloads-7954f5f757-2np4w"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735759 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-policies\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735780 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-config\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735831 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xcv\" (UniqueName: \"kubernetes.io/projected/5ba602c9-6155-46ca-baa1-0cfcd35cab16-kube-api-access-58xcv\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735847 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3395d1f-e400-4f01-87c2-7321f583d6d3-config\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735861 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3395d1f-e400-4f01-87c2-7321f583d6d3-serving-cert\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735879 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-serving-cert\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735894 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3395d1f-e400-4f01-87c2-7321f583d6d3-trusted-ca\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735909 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/890108ab-72eb-4eed-8d33-5abf5494b6d5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735924 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkd66\" (UniqueName: \"kubernetes.io/projected/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-kube-api-access-fkd66\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.735941 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4pc\" (UniqueName: \"kubernetes.io/projected/822794ef-a29d-43bb-8e01-ab9aa44ed0be-kube-api-access-7x4pc\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738298 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kn9m\" (UniqueName: \"kubernetes.io/projected/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-kube-api-access-5kn9m\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738337 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738357 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-etcd-serving-ca\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738374 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ps25\" (UniqueName: \"kubernetes.io/projected/890108ab-72eb-4eed-8d33-5abf5494b6d5-kube-api-access-2ps25\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738399 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738417 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738433 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w28j\" (UniqueName: \"kubernetes.io/projected/f3395d1f-e400-4f01-87c2-7321f583d6d3-kube-api-access-9w28j\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738448 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97a55eb5-6536-4b57-ba38-39e6739d8188-audit-dir\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738465 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/890108ab-72eb-4eed-8d33-5abf5494b6d5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738483 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zgdw\" (UniqueName: \"kubernetes.io/projected/aef72b7b-ce60-41c1-903a-16ebddec4d6f-kube-api-access-4zgdw\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738499 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-audit-dir\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"
Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738514 4704 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-images\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738530 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-serving-cert\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738558 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738573 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-oauth-serving-cert\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738590 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 
22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738606 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738622 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-config\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738640 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738656 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8q7m\" (UniqueName: \"kubernetes.io/projected/08014b73-1836-45da-a3fa-8a05ad57ebad-kube-api-access-p8q7m\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738672 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738687 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-config\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738703 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-service-ca-bundle\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738719 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-service-ca\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738736 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/890108ab-72eb-4eed-8d33-5abf5494b6d5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738753 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/822794ef-a29d-43bb-8e01-ab9aa44ed0be-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738768 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738783 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-audit\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738819 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738836 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-encryption-config\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc 
kubenswrapper[4704]: I0122 16:30:52.738949 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-config\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.739027 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-config\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.739370 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.739452 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97a55eb5-6536-4b57-ba38-39e6739d8188-audit-dir\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.740422 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3395d1f-e400-4f01-87c2-7321f583d6d3-config\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.741674 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.742327 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-encryption-config\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.742331 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-serving-cert\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.743413 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.743547 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/890108ab-72eb-4eed-8d33-5abf5494b6d5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 
16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.743652 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-audit-dir\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.743982 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-etcd-client\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.744771 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.744859 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.744906 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-serving-cert\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.736240 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.736245 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-config\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.745297 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.745505 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.746293 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-image-import-ca\") pod \"apiserver-76f77b778f-8v4fz\" (UID: 
\"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.746293 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97a55eb5-6536-4b57-ba38-39e6739d8188-etcd-client\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.746362 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/97a55eb5-6536-4b57-ba38-39e6739d8188-node-pullsecrets\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.746431 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3395d1f-e400-4f01-87c2-7321f583d6d3-serving-cert\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.736927 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.746632 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-serving-cert\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.746905 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd037191-da3d-4f66-9d51-bd18a3ba0082-service-ca-bundle\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.747037 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-etcd-serving-ca\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.737583 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-audit-policies\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.737837 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.747426 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/97a55eb5-6536-4b57-ba38-39e6739d8188-audit\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.747437 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-serving-cert\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " 
pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.747876 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.748318 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/890108ab-72eb-4eed-8d33-5abf5494b6d5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.748573 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-service-ca\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.748611 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.748872 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-encryption-config\") pod 
\"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.738296 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-dir\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749042 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-config\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749136 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-oauth-serving-cert\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749323 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749499 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f3395d1f-e400-4f01-87c2-7321f583d6d3-trusted-ca\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749590 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749830 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-policies\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749965 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.749981 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-trusted-ca-bundle\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.750279 4704 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.754146 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.754173 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd037191-da3d-4f66-9d51-bd18a3ba0082-serving-cert\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.754180 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.754324 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.754336 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-cswhh"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.754362 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-oauth-config\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.756419 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2pkc8"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.756804 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-khgwd"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.757736 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xvsbg"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.757743 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.758992 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.760172 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kxvpl"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.761056 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.762063 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kbfs9"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.762909 4704 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kbfs9" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.763171 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-whsbg"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.764192 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kbfs9"] Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.778565 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.798097 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.817482 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.837945 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.859267 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.878451 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.898414 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.918049 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.937926 4704 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.957736 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.977990 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 16:30:52 crc kubenswrapper[4704]: I0122 16:30:52.998483 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.057647 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.077647 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.099237 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.118721 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.138503 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.160029 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 16:30:53 
crc kubenswrapper[4704]: I0122 16:30:53.178365 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.198084 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.217295 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.238539 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.258204 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.278486 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.299266 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.319077 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.339175 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.358787 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.378928 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.398177 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.418670 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.438431 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.458876 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.478207 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.498880 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.519183 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.539075 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.558857 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.579124 4704 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.599179 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.619072 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.638865 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.649106 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.649382 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:32:55.649341458 +0000 UTC m=+268.293888228 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.657135 4704 request.go:700] Waited for 1.01318931s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.659464 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.679174 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.698681 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.719174 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.738437 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.744383 4704 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc 
kubenswrapper[4704]: E0122 16:30:53.744440 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-images podName:822794ef-a29d-43bb-8e01-ab9aa44ed0be nodeName:}" failed. No retries permitted until 2026-01-22 16:30:54.244424805 +0000 UTC m=+146.888971505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-images") pod "machine-api-operator-5694c8668f-hgdwt" (UID: "822794ef-a29d-43bb-8e01-ab9aa44ed0be") : failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.746806 4704 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.746851 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-config podName:822794ef-a29d-43bb-8e01-ab9aa44ed0be nodeName:}" failed. No retries permitted until 2026-01-22 16:30:54.246841078 +0000 UTC m=+146.891387778 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-config") pod "machine-api-operator-5694c8668f-hgdwt" (UID: "822794ef-a29d-43bb-8e01-ab9aa44ed0be") : failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.746913 4704 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.747004 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822794ef-a29d-43bb-8e01-ab9aa44ed0be-machine-api-operator-tls podName:822794ef-a29d-43bb-8e01-ab9aa44ed0be nodeName:}" failed. No retries permitted until 2026-01-22 16:30:54.246979271 +0000 UTC m=+146.891526011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/822794ef-a29d-43bb-8e01-ab9aa44ed0be-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-hgdwt" (UID: "822794ef-a29d-43bb-8e01-ab9aa44ed0be") : failed to sync secret cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.747009 4704 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.747096 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca podName:08014b73-1836-45da-a3fa-8a05ad57ebad nodeName:}" failed. No retries permitted until 2026-01-22 16:30:54.247075414 +0000 UTC m=+146.891622144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca") pod "controller-manager-879f6c89f-lvsjg" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad") : failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.749066 4704 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.749107 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles podName:08014b73-1836-45da-a3fa-8a05ad57ebad nodeName:}" failed. No retries permitted until 2026-01-22 16:30:54.249095126 +0000 UTC m=+146.893641826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles") pod "controller-manager-879f6c89f-lvsjg" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad") : failed to sync configmap cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.749124 4704 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: E0122 16:30:53.749146 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert podName:08014b73-1836-45da-a3fa-8a05ad57ebad nodeName:}" failed. No retries permitted until 2026-01-22 16:30:54.249139247 +0000 UTC m=+146.893685947 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert") pod "controller-manager-879f6c89f-lvsjg" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad") : failed to sync secret cache: timed out waiting for the condition Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.750575 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.750645 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.750705 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.750736 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.751897 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.754442 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.754944 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.755518 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.758108 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 16:30:53 crc 
kubenswrapper[4704]: I0122 16:30:53.778631 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.798856 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.818535 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.838669 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.851176 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.859021 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.867942 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.878707 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.898858 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.919151 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.938923 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.958847 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 16:30:53 crc kubenswrapper[4704]: I0122 16:30:53.980613 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.000783 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.018239 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.039177 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.053741 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.057834 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.077882 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.098754 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 16:30:54 crc kubenswrapper[4704]: W0122 16:30:54.103469 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-8320edf5a3f90402a38ce3f095e128b662d95d42bca0405235756786380d3280 WatchSource:0}: Error finding container 8320edf5a3f90402a38ce3f095e128b662d95d42bca0405235756786380d3280: Status 404 returned error can't find the container with id 8320edf5a3f90402a38ce3f095e128b662d95d42bca0405235756786380d3280 Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.118919 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.138124 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.157786 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.178090 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 16:30:54 crc kubenswrapper[4704]: 
I0122 16:30:54.203882 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 16:30:54 crc kubenswrapper[4704]: W0122 16:30:54.203993 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-58501d2d2426672cdb261890e1c2f90a505726d66b638dd7ad8df7f5b29a7c45 WatchSource:0}: Error finding container 58501d2d2426672cdb261890e1c2f90a505726d66b638dd7ad8df7f5b29a7c45: Status 404 returned error can't find the container with id 58501d2d2426672cdb261890e1c2f90a505726d66b638dd7ad8df7f5b29a7c45 Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.224302 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.237941 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.258255 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.258326 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-images\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.258345 4704 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.258368 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-config\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.258389 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/822794ef-a29d-43bb-8e01-ab9aa44ed0be-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.258405 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.258449 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.278337 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.298927 4704 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 16:30:54 crc kubenswrapper[4704]: W0122 16:30:54.301826 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-58f5f29e047deb60372315a42ede476c1366625dc8e36bf8bb7c5ab31bd9fea8 WatchSource:0}: Error finding container 58f5f29e047deb60372315a42ede476c1366625dc8e36bf8bb7c5ab31bd9fea8: Status 404 returned error can't find the container with id 58f5f29e047deb60372315a42ede476c1366625dc8e36bf8bb7c5ab31bd9fea8 Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.318657 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.339331 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.361178 4704 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.378362 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.397851 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.402045 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"58f5f29e047deb60372315a42ede476c1366625dc8e36bf8bb7c5ab31bd9fea8"} Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.403256 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"992c3caedac25e5ccb2ffb6ca6ab63640e64f39384a337a36de564901a61091a"} Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.403284 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"58501d2d2426672cdb261890e1c2f90a505726d66b638dd7ad8df7f5b29a7c45"} Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.404596 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"04d70888bede8b113cede37cc3369817bfe659c7ea23957a029683066e40a36f"} Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.404670 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8320edf5a3f90402a38ce3f095e128b662d95d42bca0405235756786380d3280"} Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.404841 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.418730 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.438681 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.484175 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trhfm\" 
(UniqueName: \"kubernetes.io/projected/97a55eb5-6536-4b57-ba38-39e6739d8188-kube-api-access-trhfm\") pod \"apiserver-76f77b778f-8v4fz\" (UID: \"97a55eb5-6536-4b57-ba38-39e6739d8188\") " pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.504584 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnl24\" (UniqueName: \"kubernetes.io/projected/cd037191-da3d-4f66-9d51-bd18a3ba0082-kube-api-access-mnl24\") pod \"authentication-operator-69f744f599-cpq2f\" (UID: \"cd037191-da3d-4f66-9d51-bd18a3ba0082\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.513079 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xcv\" (UniqueName: \"kubernetes.io/projected/5ba602c9-6155-46ca-baa1-0cfcd35cab16-kube-api-access-58xcv\") pod \"console-f9d7485db-khgwd\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.538142 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.544376 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zgdw\" (UniqueName: \"kubernetes.io/projected/aef72b7b-ce60-41c1-903a-16ebddec4d6f-kube-api-access-4zgdw\") pod \"oauth-openshift-558db77b4-l6zs2\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.575522 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kn9m\" (UniqueName: \"kubernetes.io/projected/dff255df-bf8a-498d-b3f6-4f8e65a7b6fc-kube-api-access-5kn9m\") pod \"openshift-config-operator-7777fb866f-kzftk\" (UID: \"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.584713 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.600122 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.615917 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkd66\" (UniqueName: \"kubernetes.io/projected/25a91c52-a0f3-43ea-b8e5-4bd074ef16b0-kube-api-access-fkd66\") pod \"apiserver-7bbb656c7d-tbg6j\" (UID: \"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.616115 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.624888 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/890108ab-72eb-4eed-8d33-5abf5494b6d5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.640162 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ps25\" (UniqueName: \"kubernetes.io/projected/890108ab-72eb-4eed-8d33-5abf5494b6d5-kube-api-access-2ps25\") pod \"cluster-image-registry-operator-dc59b4c8b-mlsxz\" (UID: \"890108ab-72eb-4eed-8d33-5abf5494b6d5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.676295 4704 request.go:700] Waited for 1.926969432s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.684076 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w28j\" (UniqueName: \"kubernetes.io/projected/f3395d1f-e400-4f01-87c2-7321f583d6d3-kube-api-access-9w28j\") pod \"console-operator-58897d9998-92qrn\" (UID: \"f3395d1f-e400-4f01-87c2-7321f583d6d3\") " pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.696820 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vb87\" (UniqueName: \"kubernetes.io/projected/24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8-kube-api-access-7vb87\") pod \"downloads-7954f5f757-2np4w\" (UID: 
\"24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8\") " pod="openshift-console/downloads-7954f5f757-2np4w" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.699213 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.718180 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.739630 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.742675 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-khgwd"] Jan 22 16:30:54 crc kubenswrapper[4704]: W0122 16:30:54.755749 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba602c9_6155_46ca_baa1_0cfcd35cab16.slice/crio-164a1094c520e39c5d6eb2c6b5b2a002a0c818bda5a777af43f41cbc090212a3 WatchSource:0}: Error finding container 164a1094c520e39c5d6eb2c6b5b2a002a0c818bda5a777af43f41cbc090212a3: Status 404 returned error can't find the container with id 164a1094c520e39c5d6eb2c6b5b2a002a0c818bda5a777af43f41cbc090212a3 Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.758258 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.785217 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.790594 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8v4fz"] Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.798351 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.818725 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.821750 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kzftk"] Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.827885 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" Jan 22 16:30:54 crc kubenswrapper[4704]: W0122 16:30:54.829516 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddff255df_bf8a_498d_b3f6_4f8e65a7b6fc.slice/crio-e02a2829e7013f27a6d045569623ea2143b0b215b77d87f6f350e3c74de504e6 WatchSource:0}: Error finding container e02a2829e7013f27a6d045569623ea2143b0b215b77d87f6f350e3c74de504e6: Status 404 returned error can't find the container with id e02a2829e7013f27a6d045569623ea2143b0b215b77d87f6f350e3c74de504e6 Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.835901 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/822794ef-a29d-43bb-8e01-ab9aa44ed0be-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.837935 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.842360 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-images\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.858277 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.862236 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.863134 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-cpq2f"] Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867433 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kjl\" (UniqueName: \"kubernetes.io/projected/40a01e3d-81aa-4444-93d8-c24228829b34-kube-api-access-b7kjl\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867490 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ded330b-1278-4aea-8eb7-711847e9a54e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867519 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-config\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867566 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-certificates\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867588 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa83c3a2-0e3f-4396-8693-69a92bf8a423-auth-proxy-config\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867607 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa83c3a2-0e3f-4396-8693-69a92bf8a423-config\") pod \"machine-approver-56656f9798-7hfbg\" (UID: 
\"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867653 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ded330b-1278-4aea-8eb7-711847e9a54e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867670 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40a01e3d-81aa-4444-93d8-c24228829b34-serving-cert\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867685 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7f7j\" (UniqueName: \"kubernetes.io/projected/fa83c3a2-0e3f-4396-8693-69a92bf8a423-kube-api-access-d7f7j\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867715 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867753 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m9gs\" (UniqueName: \"kubernetes.io/projected/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-kube-api-access-7m9gs\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867769 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa83c3a2-0e3f-4396-8693-69a92bf8a423-machine-approver-tls\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867812 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-config\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867833 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-ca\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867853 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvj2j\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-kube-api-access-nvj2j\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: 
\"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867891 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-client-ca\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867916 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-client\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867933 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-trusted-ca\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867966 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.867986 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.868004 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4lm\" (UniqueName: \"kubernetes.io/projected/64c8e38f-52cb-4101-b631-177fc6ed9086-kube-api-access-qp4lm\") pod \"cluster-samples-operator-665b6dd947-w8qrd\" (UID: \"64c8e38f-52cb-4101-b631-177fc6ed9086\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.868022 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-bound-sa-token\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: E0122 16:30:54.868618 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:55.368585486 +0000 UTC m=+148.013132186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.868037 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-tls\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.868780 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-service-ca\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.868812 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/64c8e38f-52cb-4101-b631-177fc6ed9086-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w8qrd\" (UID: \"64c8e38f-52cb-4101-b631-177fc6ed9086\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.868830 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzqqg\" (UniqueName: 
\"kubernetes.io/projected/5816a839-8a48-4e39-ae5e-82df31d282df-kube-api-access-kzqqg\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.868853 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5816a839-8a48-4e39-ae5e-82df31d282df-serving-cert\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:54 crc kubenswrapper[4704]: W0122 16:30:54.872038 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd037191_da3d_4f66_9d51_bd18a3ba0082.slice/crio-0cbe8e6401a63d8afd65ddb33d6e7f1b00637c0ebfe9421bb9ca0f3a947ba511 WatchSource:0}: Error finding container 0cbe8e6401a63d8afd65ddb33d6e7f1b00637c0ebfe9421bb9ca0f3a947ba511: Status 404 returned error can't find the container with id 0cbe8e6401a63d8afd65ddb33d6e7f1b00637c0ebfe9421bb9ca0f3a947ba511 Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.886759 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.889660 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.897702 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.903182 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.907056 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2np4w" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.919904 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.920597 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca\") pod \"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.943431 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.958180 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.972285 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.972481 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d48829-9085-45ca-bf9c-cc90d68a94a3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j286m\" (UID: \"c2d48829-9085-45ca-bf9c-cc90d68a94a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:30:54 crc kubenswrapper[4704]: E0122 16:30:54.972515 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:55.472480122 +0000 UTC m=+148.117026822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.972560 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qv4\" (UniqueName: \"kubernetes.io/projected/a406634b-d850-4e1f-af04-f1ea77244ce1-kube-api-access-p7qv4\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973031 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f5fb5c-84e5-483b-9e21-5a7849856d41-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: 
\"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973083 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8tr\" (UniqueName: \"kubernetes.io/projected/888365e6-5672-42f7-ba73-de140fe8ea0a-kube-api-access-dh8tr\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973105 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8786f1-65c2-4086-9e36-b040560dcdd4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973125 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whl5f\" (UniqueName: \"kubernetes.io/projected/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-kube-api-access-whl5f\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973146 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/29788853-d1f5-46e3-af8c-963fa9d4fef4-certs\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973203 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-config\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973228 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-default-certificate\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973608 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w9rk\" (UniqueName: \"kubernetes.io/projected/caa82913-e147-40d4-b5d6-c162427bbf32-kube-api-access-4w9rk\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973641 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-metrics-tls\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973673 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/29788853-d1f5-46e3-af8c-963fa9d4fef4-node-bootstrap-token\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973743 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9128be3c-7611-4a51-b085-33b4019a0336-tmpfs\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973761 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48d333bd-5cb1-47e5-ad50-3d17246d36fe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.973776 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-plugins-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974156 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gm4s\" (UniqueName: \"kubernetes.io/projected/29788853-d1f5-46e3-af8c-963fa9d4fef4-kube-api-access-6gm4s\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974189 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/888365e6-5672-42f7-ba73-de140fe8ea0a-config-volume\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974270 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-trusted-ca\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974287 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-csi-data-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974304 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxwn\" (UniqueName: \"kubernetes.io/projected/509d0e75-5373-44d4-9053-14d595587d05-kube-api-access-zhxwn\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974323 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xl8f\" (UniqueName: \"kubernetes.io/projected/48d333bd-5cb1-47e5-ad50-3d17246d36fe-kube-api-access-6xl8f\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974343 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-config\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974362 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/27ee8df2-66e3-4de7-a2c3-c0687e535125-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4p2x6\" (UID: \"27ee8df2-66e3-4de7-a2c3-c0687e535125\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.974970 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-config\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975078 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-bound-sa-token\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975157 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48d333bd-5cb1-47e5-ad50-3d17246d36fe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975371 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975441 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-tls\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975584 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-service-ca\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975640 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzqqg\" (UniqueName: \"kubernetes.io/projected/5816a839-8a48-4e39-ae5e-82df31d282df-kube-api-access-kzqqg\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975776 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/64c8e38f-52cb-4101-b631-177fc6ed9086-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w8qrd\" (UID: \"64c8e38f-52cb-4101-b631-177fc6ed9086\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.975945 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-trusted-ca\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976022 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5816a839-8a48-4e39-ae5e-82df31d282df-serving-cert\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976046 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-config-volume\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976078 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh994\" (UniqueName: \"kubernetes.io/projected/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-kube-api-access-qh994\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976095 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9128be3c-7611-4a51-b085-33b4019a0336-apiservice-cert\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976112 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-srv-cert\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976127 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/509d0e75-5373-44d4-9053-14d595587d05-signing-key\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976152 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-mountpoint-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976188 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a406634b-d850-4e1f-af04-f1ea77244ce1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976205 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r5dtt\" (UID: \"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976224 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8786f1-65c2-4086-9e36-b040560dcdd4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976241 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d54766-7f56-4fbc-acf2-0193dc9bf8c1-metrics-tls\") pod \"dns-operator-744455d44c-2qcrw\" (UID: \"70d54766-7f56-4fbc-acf2-0193dc9bf8c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976273 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65ebbe77-876f-45fd-8baf-2d375e7e1774-metrics-tls\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976290 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9128be3c-7611-4a51-b085-33b4019a0336-webhook-cert\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976311 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc91464-d549-47b8-a428-605eaa51a21e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976350 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-certificates\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976368 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/caa82913-e147-40d4-b5d6-c162427bbf32-srv-cert\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976384 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/caa82913-e147-40d4-b5d6-c162427bbf32-profile-collector-cert\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976413 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65ebbe77-876f-45fd-8baf-2d375e7e1774-trusted-ca\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976421 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-service-ca\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976429 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzslj\" (UniqueName: \"kubernetes.io/projected/8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0-kube-api-access-hzslj\") pod \"multus-admission-controller-857f4d67dd-r5dtt\" (UID: \"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976464 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvhw\" (UniqueName: \"kubernetes.io/projected/624441d9-c4a5-4642-b5fb-07b54e9f40e0-kube-api-access-cqvhw\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976489 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ded330b-1278-4aea-8eb7-711847e9a54e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976504 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a406634b-d850-4e1f-af04-f1ea77244ce1-proxy-tls\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976521 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7f7j\" (UniqueName: \"kubernetes.io/projected/fa83c3a2-0e3f-4396-8693-69a92bf8a423-kube-api-access-d7f7j\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976548 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppj6\" (UniqueName: \"kubernetes.io/projected/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-kube-api-access-gppj6\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976569 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82f5fb5c-84e5-483b-9e21-5a7849856d41-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.976620 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8786f1-65c2-4086-9e36-b040560dcdd4-config\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977258 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ded330b-1278-4aea-8eb7-711847e9a54e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977603 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfdae196-c821-4f78-9191-890d25ca0e54-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977640 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55af32b-969b-4bec-b0b4-49a1cacf5753-config\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977667 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977693 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m9gs\" (UniqueName: \"kubernetes.io/projected/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-kube-api-access-7m9gs\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977716 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa83c3a2-0e3f-4396-8693-69a92bf8a423-machine-approver-tls\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977755 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jvf\" (UniqueName: \"kubernetes.io/projected/65ebbe77-876f-45fd-8baf-2d375e7e1774-kube-api-access-j7jvf\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977826 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/509d0e75-5373-44d4-9053-14d595587d05-signing-cabundle\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977856 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/888365e6-5672-42f7-ba73-de140fe8ea0a-secret-volume\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977877 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4nh\" (UniqueName: \"kubernetes.io/projected/a30726df-cfa8-4da0-9aa6-419437441379-kube-api-access-fd4nh\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977925 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-ca\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977949 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp25d\" (UniqueName: \"kubernetes.io/projected/70d54766-7f56-4fbc-acf2-0193dc9bf8c1-kube-api-access-dp25d\") pod \"dns-operator-744455d44c-2qcrw\" (UID: \"70d54766-7f56-4fbc-acf2-0193dc9bf8c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.977995 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvj2j\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-kube-api-access-nvj2j\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.978120 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-certificates\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.978578 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-client-ca\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.985984 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-client\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986077 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-socket-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986132 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khr5z\" (UniqueName: \"kubernetes.io/projected/752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f-kube-api-access-khr5z\") pod \"migrator-59844c95c7-57dhz\" (UID: \"752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986170 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnbcs\" (UniqueName: \"kubernetes.io/projected/27ee8df2-66e3-4de7-a2c3-c0687e535125-kube-api-access-rnbcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-4p2x6\" (UID: \"27ee8df2-66e3-4de7-a2c3-c0687e535125\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986192 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-ca\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986246 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55af32b-969b-4bec-b0b4-49a1cacf5753-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986280 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f5fb5c-84e5-483b-9e21-5a7849856d41-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986332 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07493bb4-1b2a-4770-8a0f-67ea302818c4-cert\") pod \"ingress-canary-kbfs9\" (UID: \"07493bb4-1b2a-4770-8a0f-67ea302818c4\") " pod="openshift-ingress-canary/ingress-canary-kbfs9"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986361 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/278370ba-36fe-40ff-8719-19b42b0357be-service-ca-bundle\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986426 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986452 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986485 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp4lm\" (UniqueName: \"kubernetes.io/projected/64c8e38f-52cb-4101-b631-177fc6ed9086-kube-api-access-qp4lm\") pod \"cluster-samples-operator-665b6dd947-w8qrd\" (UID: \"64c8e38f-52cb-4101-b631-177fc6ed9086\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986519 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cfdae196-c821-4f78-9191-890d25ca0e54-proxy-tls\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986568 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc91464-d549-47b8-a428-605eaa51a21e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986600 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-registration-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986627 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-stats-auth\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986653 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b55af32b-969b-4bec-b0b4-49a1cacf5753-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986696 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-serving-cert\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986745 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-client-ca\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986753 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp725\" (UniqueName: \"kubernetes.io/projected/4fc91464-d549-47b8-a428-605eaa51a21e-kube-api-access-kp725\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986891 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65ebbe77-876f-45fd-8baf-2d375e7e1774-bound-sa-token\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986926 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkpw\" (UniqueName: \"kubernetes.io/projected/cfdae196-c821-4f78-9191-890d25ca0e54-kube-api-access-tfkpw\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.986993 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cg9x\" (UniqueName: \"kubernetes.io/projected/07493bb4-1b2a-4770-8a0f-67ea302818c4-kube-api-access-8cg9x\") pod \"ingress-canary-kbfs9\" (UID: \"07493bb4-1b2a-4770-8a0f-67ea302818c4\") " pod="openshift-ingress-canary/ingress-canary-kbfs9"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987023 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qcwg\" (UniqueName: \"kubernetes.io/projected/9128be3c-7611-4a51-b085-33b4019a0336-kube-api-access-8qcwg\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"
Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987055 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987100 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kjl\" (UniqueName: \"kubernetes.io/projected/40a01e3d-81aa-4444-93d8-c24228829b34-kube-api-access-b7kjl\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987130 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfqx\" (UniqueName: \"kubernetes.io/projected/c2d48829-9085-45ca-bf9c-cc90d68a94a3-kube-api-access-hwfqx\") pod \"package-server-manager-789f6589d5-j286m\" (UID: \"c2d48829-9085-45ca-bf9c-cc90d68a94a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987163 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-metrics-certs\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987189 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa83c3a2-0e3f-4396-8693-69a92bf8a423-machine-approver-tls\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:54 crc 
kubenswrapper[4704]: I0122 16:30:54.987243 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-config\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987295 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ded330b-1278-4aea-8eb7-711847e9a54e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987338 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa83c3a2-0e3f-4396-8693-69a92bf8a423-auth-proxy-config\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987368 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfpf\" (UniqueName: \"kubernetes.io/projected/278370ba-36fe-40ff-8719-19b42b0357be-kube-api-access-thfpf\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987402 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa83c3a2-0e3f-4396-8693-69a92bf8a423-config\") pod \"machine-approver-56656f9798-7hfbg\" (UID: 
\"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987432 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a406634b-d850-4e1f-af04-f1ea77244ce1-images\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987501 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.987532 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40a01e3d-81aa-4444-93d8-c24228829b34-serving-cert\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.992013 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5816a839-8a48-4e39-ae5e-82df31d282df-serving-cert\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.993975 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-config\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.994888 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40a01e3d-81aa-4444-93d8-c24228829b34-etcd-client\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.998370 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.998641 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/64c8e38f-52cb-4101-b631-177fc6ed9086-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w8qrd\" (UID: \"64c8e38f-52cb-4101-b631-177fc6ed9086\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" Jan 22 16:30:54 crc kubenswrapper[4704]: E0122 16:30:54.999306 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:55.49927938 +0000 UTC m=+148.143826080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:54 crc kubenswrapper[4704]: I0122 16:30:54.999544 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.000227 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa83c3a2-0e3f-4396-8693-69a92bf8a423-config\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.000371 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ded330b-1278-4aea-8eb7-711847e9a54e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.000654 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa83c3a2-0e3f-4396-8693-69a92bf8a423-auth-proxy-config\") pod \"machine-approver-56656f9798-7hfbg\" (UID: 
\"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.001206 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40a01e3d-81aa-4444-93d8-c24228829b34-serving-cert\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.001204 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.004630 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.010399 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-tls\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.012057 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j"] Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.021427 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8q7m\" (UniqueName: \"kubernetes.io/projected/08014b73-1836-45da-a3fa-8a05ad57ebad-kube-api-access-p8q7m\") pod 
\"controller-manager-879f6c89f-lvsjg\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.022092 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.030103 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/822794ef-a29d-43bb-8e01-ab9aa44ed0be-config\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.039735 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.062618 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz"] Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.066138 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4pc\" (UniqueName: \"kubernetes.io/projected/822794ef-a29d-43bb-8e01-ab9aa44ed0be-kube-api-access-7x4pc\") pod \"machine-api-operator-5694c8668f-hgdwt\" (UID: \"822794ef-a29d-43bb-8e01-ab9aa44ed0be\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.089953 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090579 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090627 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d48829-9085-45ca-bf9c-cc90d68a94a3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j286m\" (UID: \"c2d48829-9085-45ca-bf9c-cc90d68a94a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090651 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7qv4\" (UniqueName: \"kubernetes.io/projected/a406634b-d850-4e1f-af04-f1ea77244ce1-kube-api-access-p7qv4\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090675 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f5fb5c-84e5-483b-9e21-5a7849856d41-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090697 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8tr\" (UniqueName: 
\"kubernetes.io/projected/888365e6-5672-42f7-ba73-de140fe8ea0a-kube-api-access-dh8tr\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090718 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/29788853-d1f5-46e3-af8c-963fa9d4fef4-certs\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090740 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8786f1-65c2-4086-9e36-b040560dcdd4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090760 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whl5f\" (UniqueName: \"kubernetes.io/projected/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-kube-api-access-whl5f\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090817 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-default-certificate\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090842 4704 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4w9rk\" (UniqueName: \"kubernetes.io/projected/caa82913-e147-40d4-b5d6-c162427bbf32-kube-api-access-4w9rk\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090861 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-metrics-tls\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090884 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9128be3c-7611-4a51-b085-33b4019a0336-tmpfs\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090905 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/29788853-d1f5-46e3-af8c-963fa9d4fef4-node-bootstrap-token\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090925 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48d333bd-5cb1-47e5-ad50-3d17246d36fe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" Jan 22 16:30:55 crc kubenswrapper[4704]: 
I0122 16:30:55.090946 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-plugins-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090969 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/888365e6-5672-42f7-ba73-de140fe8ea0a-config-volume\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.090989 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gm4s\" (UniqueName: \"kubernetes.io/projected/29788853-d1f5-46e3-af8c-963fa9d4fef4-kube-api-access-6gm4s\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091017 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xl8f\" (UniqueName: \"kubernetes.io/projected/48d333bd-5cb1-47e5-ad50-3d17246d36fe-kube-api-access-6xl8f\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091040 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-csi-data-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: 
\"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091064 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxwn\" (UniqueName: \"kubernetes.io/projected/509d0e75-5373-44d4-9053-14d595587d05-kube-api-access-zhxwn\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091099 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-config\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091124 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/27ee8df2-66e3-4de7-a2c3-c0687e535125-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4p2x6\" (UID: \"27ee8df2-66e3-4de7-a2c3-c0687e535125\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091191 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48d333bd-5cb1-47e5-ad50-3d17246d36fe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091215 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091258 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh994\" (UniqueName: \"kubernetes.io/projected/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-kube-api-access-qh994\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091282 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9128be3c-7611-4a51-b085-33b4019a0336-apiservice-cert\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091302 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-config-volume\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091324 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-mountpoint-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091344 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a406634b-d850-4e1f-af04-f1ea77244ce1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091366 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-srv-cert\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091385 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/509d0e75-5373-44d4-9053-14d595587d05-signing-key\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091408 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r5dtt\" (UID: \"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091427 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8786f1-65c2-4086-9e36-b040560dcdd4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091451 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d54766-7f56-4fbc-acf2-0193dc9bf8c1-metrics-tls\") pod \"dns-operator-744455d44c-2qcrw\" (UID: \"70d54766-7f56-4fbc-acf2-0193dc9bf8c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091473 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65ebbe77-876f-45fd-8baf-2d375e7e1774-metrics-tls\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091491 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9128be3c-7611-4a51-b085-33b4019a0336-webhook-cert\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091513 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc91464-d549-47b8-a428-605eaa51a21e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091535 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/caa82913-e147-40d4-b5d6-c162427bbf32-srv-cert\") 
pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091556 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/caa82913-e147-40d4-b5d6-c162427bbf32-profile-collector-cert\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091579 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65ebbe77-876f-45fd-8baf-2d375e7e1774-trusted-ca\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091602 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvhw\" (UniqueName: \"kubernetes.io/projected/624441d9-c4a5-4642-b5fb-07b54e9f40e0-kube-api-access-cqvhw\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091625 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzslj\" (UniqueName: \"kubernetes.io/projected/8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0-kube-api-access-hzslj\") pod \"multus-admission-controller-857f4d67dd-r5dtt\" (UID: \"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091649 4704 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a406634b-d850-4e1f-af04-f1ea77244ce1-proxy-tls\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091680 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppj6\" (UniqueName: \"kubernetes.io/projected/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-kube-api-access-gppj6\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091702 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82f5fb5c-84e5-483b-9e21-5a7849856d41-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091724 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8786f1-65c2-4086-9e36-b040560dcdd4-config\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091749 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfdae196-c821-4f78-9191-890d25ca0e54-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.091781 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55af32b-969b-4bec-b0b4-49a1cacf5753-config\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093752 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7jvf\" (UniqueName: \"kubernetes.io/projected/65ebbe77-876f-45fd-8baf-2d375e7e1774-kube-api-access-j7jvf\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093807 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/509d0e75-5373-44d4-9053-14d595587d05-signing-cabundle\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093833 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/888365e6-5672-42f7-ba73-de140fe8ea0a-secret-volume\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093854 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd4nh\" (UniqueName: 
\"kubernetes.io/projected/a30726df-cfa8-4da0-9aa6-419437441379-kube-api-access-fd4nh\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093881 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp25d\" (UniqueName: \"kubernetes.io/projected/70d54766-7f56-4fbc-acf2-0193dc9bf8c1-kube-api-access-dp25d\") pod \"dns-operator-744455d44c-2qcrw\" (UID: \"70d54766-7f56-4fbc-acf2-0193dc9bf8c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093916 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-socket-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093938 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khr5z\" (UniqueName: \"kubernetes.io/projected/752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f-kube-api-access-khr5z\") pod \"migrator-59844c95c7-57dhz\" (UID: \"752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.093959 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnbcs\" (UniqueName: \"kubernetes.io/projected/27ee8df2-66e3-4de7-a2c3-c0687e535125-kube-api-access-rnbcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-4p2x6\" (UID: \"27ee8df2-66e3-4de7-a2c3-c0687e535125\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 
16:30:55.093980 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07493bb4-1b2a-4770-8a0f-67ea302818c4-cert\") pod \"ingress-canary-kbfs9\" (UID: \"07493bb4-1b2a-4770-8a0f-67ea302818c4\") " pod="openshift-ingress-canary/ingress-canary-kbfs9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094000 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/278370ba-36fe-40ff-8719-19b42b0357be-service-ca-bundle\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094021 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55af32b-969b-4bec-b0b4-49a1cacf5753-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094042 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f5fb5c-84e5-483b-9e21-5a7849856d41-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094081 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cfdae196-c821-4f78-9191-890d25ca0e54-proxy-tls\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094102 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc91464-d549-47b8-a428-605eaa51a21e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094124 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-registration-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094144 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-stats-auth\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094163 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b55af32b-969b-4bec-b0b4-49a1cacf5753-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094187 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-serving-cert\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094212 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp725\" (UniqueName: \"kubernetes.io/projected/4fc91464-d549-47b8-a428-605eaa51a21e-kube-api-access-kp725\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094240 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65ebbe77-876f-45fd-8baf-2d375e7e1774-bound-sa-token\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094262 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfkpw\" (UniqueName: \"kubernetes.io/projected/cfdae196-c821-4f78-9191-890d25ca0e54-kube-api-access-tfkpw\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094290 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cg9x\" (UniqueName: \"kubernetes.io/projected/07493bb4-1b2a-4770-8a0f-67ea302818c4-kube-api-access-8cg9x\") pod \"ingress-canary-kbfs9\" (UID: \"07493bb4-1b2a-4770-8a0f-67ea302818c4\") " 
pod="openshift-ingress-canary/ingress-canary-kbfs9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094311 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qcwg\" (UniqueName: \"kubernetes.io/projected/9128be3c-7611-4a51-b085-33b4019a0336-kube-api-access-8qcwg\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094333 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094369 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfqx\" (UniqueName: \"kubernetes.io/projected/c2d48829-9085-45ca-bf9c-cc90d68a94a3-kube-api-access-hwfqx\") pod \"package-server-manager-789f6589d5-j286m\" (UID: \"c2d48829-9085-45ca-bf9c-cc90d68a94a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094391 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-metrics-certs\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094422 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thfpf\" (UniqueName: 
\"kubernetes.io/projected/278370ba-36fe-40ff-8719-19b42b0357be-kube-api-access-thfpf\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094447 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a406634b-d850-4e1f-af04-f1ea77244ce1-images\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.094524 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c2d48829-9085-45ca-bf9c-cc90d68a94a3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j286m\" (UID: \"c2d48829-9085-45ca-bf9c-cc90d68a94a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.094644 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:55.594624724 +0000 UTC m=+148.239171484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.095385 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l6zs2"] Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.095842 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9128be3c-7611-4a51-b085-33b4019a0336-tmpfs\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.097415 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a406634b-d850-4e1f-af04-f1ea77244ce1-images\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.097586 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f5fb5c-84e5-483b-9e21-5a7849856d41-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.098511 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/caa82913-e147-40d4-b5d6-c162427bbf32-profile-collector-cert\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.098663 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.100495 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/29788853-d1f5-46e3-af8c-963fa9d4fef4-node-bootstrap-token\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.102851 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48d333bd-5cb1-47e5-ad50-3d17246d36fe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.103403 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-config\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" Jan 
22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.103896 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.103909 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fc91464-d549-47b8-a428-605eaa51a21e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.104295 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-registration-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.104710 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/27ee8df2-66e3-4de7-a2c3-c0687e535125-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4p2x6\" (UID: \"27ee8df2-66e3-4de7-a2c3-c0687e535125\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.104753 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/29788853-d1f5-46e3-af8c-963fa9d4fef4-certs\") pod 
\"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.105182 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65ebbe77-876f-45fd-8baf-2d375e7e1774-trusted-ca\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.105401 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-bound-sa-token\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.105474 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-plugins-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.105770 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/509d0e75-5373-44d4-9053-14d595587d05-signing-cabundle\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.105920 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a406634b-d850-4e1f-af04-f1ea77244ce1-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.107488 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/888365e6-5672-42f7-ba73-de140fe8ea0a-config-volume\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.108094 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82f5fb5c-84e5-483b-9e21-5a7849856d41-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.109102 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/278370ba-36fe-40ff-8719-19b42b0357be-service-ca-bundle\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.109326 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-csi-data-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.109731 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-serving-cert\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.110002 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-config-volume\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.110156 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-mountpoint-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.110344 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.111224 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-srv-cert\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.111285 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/509d0e75-5373-44d4-9053-14d595587d05-signing-key\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.111395 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9128be3c-7611-4a51-b085-33b4019a0336-apiservice-cert\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.111413 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48d333bd-5cb1-47e5-ad50-3d17246d36fe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.112003 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/624441d9-c4a5-4642-b5fb-07b54e9f40e0-socket-dir\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.112605 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-r5dtt\" (UID: \"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.113739 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/caa82913-e147-40d4-b5d6-c162427bbf32-srv-cert\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.114934 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzqqg\" (UniqueName: \"kubernetes.io/projected/5816a839-8a48-4e39-ae5e-82df31d282df-kube-api-access-kzqqg\") pod \"route-controller-manager-6576b87f9c-97vvp\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.115282 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8786f1-65c2-4086-9e36-b040560dcdd4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.115683 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-stats-auth\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.116472 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cfdae196-c821-4f78-9191-890d25ca0e54-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.117004 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55af32b-969b-4bec-b0b4-49a1cacf5753-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.117053 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8786f1-65c2-4086-9e36-b040560dcdd4-config\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.117589 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07493bb4-1b2a-4770-8a0f-67ea302818c4-cert\") pod \"ingress-canary-kbfs9\" (UID: \"07493bb4-1b2a-4770-8a0f-67ea302818c4\") " pod="openshift-ingress-canary/ingress-canary-kbfs9"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.117898 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9128be3c-7611-4a51-b085-33b4019a0336-webhook-cert\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.117999 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55af32b-969b-4bec-b0b4-49a1cacf5753-config\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.118056 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-metrics-tls\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.118932 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-default-certificate\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.120153 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fc91464-d549-47b8-a428-605eaa51a21e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.120360 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cfdae196-c821-4f78-9191-890d25ca0e54-proxy-tls\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.121986 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/70d54766-7f56-4fbc-acf2-0193dc9bf8c1-metrics-tls\") pod \"dns-operator-744455d44c-2qcrw\" (UID: \"70d54766-7f56-4fbc-acf2-0193dc9bf8c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.122625 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a406634b-d850-4e1f-af04-f1ea77244ce1-proxy-tls\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.124483 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/888365e6-5672-42f7-ba73-de140fe8ea0a-secret-volume\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.137932 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7f7j\" (UniqueName: \"kubernetes.io/projected/fa83c3a2-0e3f-4396-8693-69a92bf8a423-kube-api-access-d7f7j\") pod \"machine-approver-56656f9798-7hfbg\" (UID: \"fa83c3a2-0e3f-4396-8693-69a92bf8a423\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.148246 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65ebbe77-876f-45fd-8baf-2d375e7e1774-metrics-tls\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.150414 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/278370ba-36fe-40ff-8719-19b42b0357be-metrics-certs\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.154940 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m9gs\" (UniqueName: \"kubernetes.io/projected/a67cc5ad-43f8-4b5d-846c-981ca3b07e1a-kube-api-access-7m9gs\") pod \"openshift-apiserver-operator-796bbdcf4f-bsznl\" (UID: \"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.172470 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvj2j\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-kube-api-access-nvj2j\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.195416 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.195713 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.195860 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:55.69584119 +0000 UTC m=+148.340387950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.196416 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kjl\" (UniqueName: \"kubernetes.io/projected/40a01e3d-81aa-4444-93d8-c24228829b34-kube-api-access-b7kjl\") pod \"etcd-operator-b45778765-2pkc8\" (UID: \"40a01e3d-81aa-4444-93d8-c24228829b34\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.216553 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp4lm\" (UniqueName: \"kubernetes.io/projected/64c8e38f-52cb-4101-b631-177fc6ed9086-kube-api-access-qp4lm\") pod \"cluster-samples-operator-665b6dd947-w8qrd\" (UID: \"64c8e38f-52cb-4101-b631-177fc6ed9086\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.222943 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2np4w"]
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.229954 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.253040 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhxwn\" (UniqueName: \"kubernetes.io/projected/509d0e75-5373-44d4-9053-14d595587d05-kube-api-access-zhxwn\") pod \"service-ca-9c57cc56f-cswhh\" (UID: \"509d0e75-5373-44d4-9053-14d595587d05\") " pod="openshift-service-ca/service-ca-9c57cc56f-cswhh"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.270359 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.275256 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7qv4\" (UniqueName: \"kubernetes.io/projected/a406634b-d850-4e1f-af04-f1ea77244ce1-kube-api-access-p7qv4\") pod \"machine-config-operator-74547568cd-bk6h2\" (UID: \"a406634b-d850-4e1f-af04-f1ea77244ce1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.284237 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.290817 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.297326 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.297862 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:55.797840247 +0000 UTC m=+148.442386947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.301120 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8tr\" (UniqueName: \"kubernetes.io/projected/888365e6-5672-42f7-ba73-de140fe8ea0a-kube-api-access-dh8tr\") pod \"collect-profiles-29484990-9qpp5\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.310611 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg"
Jan 22 16:30:55 crc kubenswrapper[4704]: W0122 16:30:55.312656 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa83c3a2_0e3f_4396_8693_69a92bf8a423.slice/crio-26cac60bc6479094dde27340ce9ebc60b06f7d978abb657501e1460cd9c2f2b7 WatchSource:0}: Error finding container 26cac60bc6479094dde27340ce9ebc60b06f7d978abb657501e1460cd9c2f2b7: Status 404 returned error can't find the container with id 26cac60bc6479094dde27340ce9ebc60b06f7d978abb657501e1460cd9c2f2b7
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.314349 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd4nh\" (UniqueName: \"kubernetes.io/projected/a30726df-cfa8-4da0-9aa6-419437441379-kube-api-access-fd4nh\") pod \"marketplace-operator-79b997595-lx7sw\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.335876 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xl8f\" (UniqueName: \"kubernetes.io/projected/48d333bd-5cb1-47e5-ad50-3d17246d36fe-kube-api-access-6xl8f\") pod \"openshift-controller-manager-operator-756b6f6bc6-zkl2z\" (UID: \"48d333bd-5cb1-47e5-ad50-3d17246d36fe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.338375 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.355776 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-92qrn"]
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.366765 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnbcs\" (UniqueName: \"kubernetes.io/projected/27ee8df2-66e3-4de7-a2c3-c0687e535125-kube-api-access-rnbcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-4p2x6\" (UID: \"27ee8df2-66e3-4de7-a2c3-c0687e535125\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.367125 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.373823 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cswhh"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.386300 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65ebbe77-876f-45fd-8baf-2d375e7e1774-bound-sa-token\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.399767 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.400430 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:55.900411699 +0000 UTC m=+148.544958399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.405575 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp725\" (UniqueName: \"kubernetes.io/projected/4fc91464-d549-47b8-a428-605eaa51a21e-kube-api-access-kp725\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rdsv\" (UID: \"4fc91464-d549-47b8-a428-605eaa51a21e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.412151 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.427251 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh994\" (UniqueName: \"kubernetes.io/projected/046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c-kube-api-access-qh994\") pod \"service-ca-operator-777779d784-pbnj7\" (UID: \"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.454323 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khr5z\" (UniqueName: \"kubernetes.io/projected/752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f-kube-api-access-khr5z\") pod \"migrator-59844c95c7-57dhz\" (UID: \"752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.464459 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82f5fb5c-84e5-483b-9e21-5a7849856d41-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w79nv\" (UID: \"82f5fb5c-84e5-483b-9e21-5a7849856d41\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.471648 4704 generic.go:334] "Generic (PLEG): container finished" podID="25a91c52-a0f3-43ea-b8e5-4bd074ef16b0" containerID="7f8a0403e79bb85bdda048386407283a9dc9368989b8f2120ebde908ad726a9e" exitCode=0
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.472290 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" event={"ID":"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0","Type":"ContainerDied","Data":"7f8a0403e79bb85bdda048386407283a9dc9368989b8f2120ebde908ad726a9e"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.472330 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" event={"ID":"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0","Type":"ContainerStarted","Data":"084d971d2e6ce0cea2bd1246c49368039c3036b5d262b328d7b519eae446917f"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.482879 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp25d\" (UniqueName: \"kubernetes.io/projected/70d54766-7f56-4fbc-acf2-0193dc9bf8c1-kube-api-access-dp25d\") pod \"dns-operator-744455d44c-2qcrw\" (UID: \"70d54766-7f56-4fbc-acf2-0193dc9bf8c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.501241 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppj6\" (UniqueName: \"kubernetes.io/projected/4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb-kube-api-access-gppj6\") pod \"olm-operator-6b444d44fb-4dn9x\" (UID: \"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.501317 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.501758 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.001732048 +0000 UTC m=+148.646278748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.508129 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" event={"ID":"890108ab-72eb-4eed-8d33-5abf5494b6d5","Type":"ContainerStarted","Data":"96ce8ca82e0c86db95796c3d213f2e7c968c6ed16c3e07538c2dd374471235fa"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.508170 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" event={"ID":"890108ab-72eb-4eed-8d33-5abf5494b6d5","Type":"ContainerStarted","Data":"77d914c8c619a13fdfacb7fc1abcd3f8208d44631ac509d54fd6d7cb268aaf1d"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.516143 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.517105 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gm4s\" (UniqueName: \"kubernetes.io/projected/29788853-d1f5-46e3-af8c-963fa9d4fef4-kube-api-access-6gm4s\") pod \"machine-config-server-fxzwk\" (UID: \"29788853-d1f5-46e3-af8c-963fa9d4fef4\") " pod="openshift-machine-config-operator/machine-config-server-fxzwk"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.533537 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvhw\" (UniqueName: \"kubernetes.io/projected/624441d9-c4a5-4642-b5fb-07b54e9f40e0-kube-api-access-cqvhw\") pod \"csi-hostpathplugin-whsbg\" (UID: \"624441d9-c4a5-4642-b5fb-07b54e9f40e0\") " pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.539150 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.549905 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5f7b353f12def248b17548ec7bec0690f9e9a948b58a75c89590878a858f4d18"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.550126 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.550529 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.571724 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.577587 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b55af32b-969b-4bec-b0b4-49a1cacf5753-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f97bj\" (UID: \"b55af32b-969b-4bec-b0b4-49a1cacf5753\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.591047 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.591806 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.598845 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whl5f\" (UniqueName: \"kubernetes.io/projected/9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74-kube-api-access-whl5f\") pod \"dns-default-kxvpl\" (UID: \"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74\") " pod="openshift-dns/dns-default-kxvpl"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.599272 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.602691 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzslj\" (UniqueName: \"kubernetes.io/projected/8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0-kube-api-access-hzslj\") pod \"multus-admission-controller-857f4d67dd-r5dtt\" (UID: \"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.603082 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" event={"ID":"aef72b7b-ce60-41c1-903a-16ebddec4d6f","Type":"ContainerStarted","Data":"22a8f86caa6a0bba218ce3af799b77d23dcabb8cfb7104850f856be7fcf999ce"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.604912 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.605493 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.605940 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.105927482 +0000 UTC m=+148.750474182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.606599 4704 generic.go:334] "Generic (PLEG): container finished" podID="97a55eb5-6536-4b57-ba38-39e6739d8188" containerID="6bb340ddaf504bd8a87e1866dde0ad46d309904fe580e7ebe0eba2c2e5ecbb5c" exitCode=0
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.606644 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" event={"ID":"97a55eb5-6536-4b57-ba38-39e6739d8188","Type":"ContainerDied","Data":"6bb340ddaf504bd8a87e1866dde0ad46d309904fe580e7ebe0eba2c2e5ecbb5c"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.606662 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" event={"ID":"97a55eb5-6536-4b57-ba38-39e6739d8188","Type":"ContainerStarted","Data":"3f3236cada1ba98a5a22fd09cde0dd5e3e00147afbf13e9c33d2540cd3312797"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.615104 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khgwd" event={"ID":"5ba602c9-6155-46ca-baa1-0cfcd35cab16","Type":"ContainerStarted","Data":"a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.615159 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khgwd" event={"ID":"5ba602c9-6155-46ca-baa1-0cfcd35cab16","Type":"ContainerStarted","Data":"164a1094c520e39c5d6eb2c6b5b2a002a0c818bda5a777af43f41cbc090212a3"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.618029 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2np4w" event={"ID":"24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8","Type":"ContainerStarted","Data":"5923caa985608d79e819861e83f7b65bbaa8792d644d5d0e49d14829ab3087c8"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.618761 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" event={"ID":"fa83c3a2-0e3f-4396-8693-69a92bf8a423","Type":"ContainerStarted","Data":"26cac60bc6479094dde27340ce9ebc60b06f7d978abb657501e1460cd9c2f2b7"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.622410 4704 generic.go:334] "Generic (PLEG): container finished" podID="dff255df-bf8a-498d-b3f6-4f8e65a7b6fc" containerID="d745ea016b7bf920c9561a7fecefd9aed7498bdd26371711a48aacad6544ed38" exitCode=0
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.622463 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" event={"ID":"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc","Type":"ContainerDied","Data":"d745ea016b7bf920c9561a7fecefd9aed7498bdd26371711a48aacad6544ed38"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.622482 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" event={"ID":"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc","Type":"ContainerStarted","Data":"e02a2829e7013f27a6d045569623ea2143b0b215b77d87f6f350e3c74de504e6"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.633469 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfpf\" (UniqueName: \"kubernetes.io/projected/278370ba-36fe-40ff-8719-19b42b0357be-kube-api-access-thfpf\") pod \"router-default-5444994796-gllz9\" (UID: \"278370ba-36fe-40ff-8719-19b42b0357be\") " pod="openshift-ingress/router-default-5444994796-gllz9"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.646207 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.648241 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cg9x\" (UniqueName: \"kubernetes.io/projected/07493bb4-1b2a-4770-8a0f-67ea302818c4-kube-api-access-8cg9x\") pod \"ingress-canary-kbfs9\" (UID: \"07493bb4-1b2a-4770-8a0f-67ea302818c4\") " pod="openshift-ingress-canary/ingress-canary-kbfs9"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.651270 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kxvpl"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.663304 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb8786f1-65c2-4086-9e36-b040560dcdd4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s6kjm\" (UID: \"fb8786f1-65c2-4086-9e36-b040560dcdd4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.674475 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwfqx\" (UniqueName: \"kubernetes.io/projected/c2d48829-9085-45ca-bf9c-cc90d68a94a3-kube-api-access-hwfqx\") pod \"package-server-manager-789f6589d5-j286m\" (UID: \"c2d48829-9085-45ca-bf9c-cc90d68a94a3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.677190 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" event={"ID":"cd037191-da3d-4f66-9d51-bd18a3ba0082","Type":"ContainerStarted","Data":"fe40699cb4b385b2e70d3a2a9c9b1571da03c11a3b4236e60411562744ac9bc1"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.677227 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" event={"ID":"cd037191-da3d-4f66-9d51-bd18a3ba0082","Type":"ContainerStarted","Data":"0cbe8e6401a63d8afd65ddb33d6e7f1b00637c0ebfe9421bb9ca0f3a947ba511"}
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.699239 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-whsbg"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.704664 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfkpw\" (UniqueName: \"kubernetes.io/projected/cfdae196-c821-4f78-9191-890d25ca0e54-kube-api-access-tfkpw\") pod \"machine-config-controller-84d6567774-q6ffh\" (UID: \"cfdae196-c821-4f78-9191-890d25ca0e54\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"
Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.705149 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fxzwk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.707455 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.707642 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.20762053 +0000 UTC m=+148.852167240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.707922 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kbfs9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.708708 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.709034 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.209020687 +0000 UTC m=+148.853567387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.715342 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qcwg\" (UniqueName: \"kubernetes.io/projected/9128be3c-7611-4a51-b085-33b4019a0336-kube-api-access-8qcwg\") pod \"packageserver-d55dfcdfc-24b8b\" (UID: \"9128be3c-7611-4a51-b085-33b4019a0336\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.736468 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7jvf\" (UniqueName: 
\"kubernetes.io/projected/65ebbe77-876f-45fd-8baf-2d375e7e1774-kube-api-access-j7jvf\") pod \"ingress-operator-5b745b69d9-glvzp\" (UID: \"65ebbe77-876f-45fd-8baf-2d375e7e1774\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.754969 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"] Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.770311 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w9rk\" (UniqueName: \"kubernetes.io/projected/caa82913-e147-40d4-b5d6-c162427bbf32-kube-api-access-4w9rk\") pod \"catalog-operator-68c6474976-jksvk\" (UID: \"caa82913-e147-40d4-b5d6-c162427bbf32\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.766788 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2pkc8"] Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.810214 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.811205 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.311184438 +0000 UTC m=+148.955731138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.825332 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.864361 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.864757 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.877262 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.915083 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.921855 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hgdwt"] Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.921902 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd"] Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.922085 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.926004 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:55 crc kubenswrapper[4704]: E0122 16:30:55.926297 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.426285726 +0000 UTC m=+149.070832426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.928412 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:55 crc kubenswrapper[4704]: I0122 16:30:55.968887 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" Jan 22 16:30:55 crc kubenswrapper[4704]: W0122 16:30:55.975418 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29788853_d1f5_46e3_af8c_963fa9d4fef4.slice/crio-4dd9552dab3f6e1d88b682b7f5c2ab4dc486a464e52e902bf06d9c93eb222957 WatchSource:0}: Error finding container 4dd9552dab3f6e1d88b682b7f5c2ab4dc486a464e52e902bf06d9c93eb222957: Status 404 returned error can't find the container with id 4dd9552dab3f6e1d88b682b7f5c2ab4dc486a464e52e902bf06d9c93eb222957 Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.026726 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.027246 4704 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.527204455 +0000 UTC m=+149.171751155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.128427 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.129089 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.629074198 +0000 UTC m=+149.273620898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.234437 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.234921 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.734871694 +0000 UTC m=+149.379418394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.344742 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.345441 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.845430093 +0000 UTC m=+149.489976793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.455440 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.455764 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:56.955746136 +0000 UTC m=+149.600292836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.468611 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl"] Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.468746 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5"] Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.512751 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2"] Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.514333 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lvsjg"] Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.559934 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.560395 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 16:30:57.060371461 +0000 UTC m=+149.704918161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.648937 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" event={"ID":"25a91c52-a0f3-43ea-b8e5-4bd074ef16b0","Type":"ContainerStarted","Data":"42d6a37ab9a64467f67b73329474fabf87f20df825f105f543ae4f5d56e99aa8"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.650947 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" event={"ID":"aef72b7b-ce60-41c1-903a-16ebddec4d6f","Type":"ContainerStarted","Data":"fa45ff431904d842dabcc7332822f93ecd838e4e1348f8d3b994f8e80f4d432b"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.651380 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.653352 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" event={"ID":"dff255df-bf8a-498d-b3f6-4f8e65a7b6fc","Type":"ContainerStarted","Data":"710f99157dbf0c83437785ab37f91d73a9eccbeaba662f2bdd7a8445458e52dd"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.653650 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:30:56 crc 
kubenswrapper[4704]: I0122 16:30:56.654690 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fxzwk" event={"ID":"29788853-d1f5-46e3-af8c-963fa9d4fef4","Type":"ContainerStarted","Data":"db5828fbc04c7e8a68970160af068ecf93d5967bc1ede9b174cb5db3b1019692"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.654723 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fxzwk" event={"ID":"29788853-d1f5-46e3-af8c-963fa9d4fef4","Type":"ContainerStarted","Data":"4dd9552dab3f6e1d88b682b7f5c2ab4dc486a464e52e902bf06d9c93eb222957"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.655396 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gllz9" event={"ID":"278370ba-36fe-40ff-8719-19b42b0357be","Type":"ContainerStarted","Data":"34ad07d23e619d8f5405904458bfc7d52d1d63802493e53221b7538dddccdabf"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.656313 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" event={"ID":"5816a839-8a48-4e39-ae5e-82df31d282df","Type":"ContainerStarted","Data":"42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.656339 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" event={"ID":"5816a839-8a48-4e39-ae5e-82df31d282df","Type":"ContainerStarted","Data":"8c6cc3e869d47ce0a49bec3454eae50adcd94850ff9dcebe094e8c7699c13b44"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.656486 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.661538 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.661655 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.161620979 +0000 UTC m=+149.806167679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.661841 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.661997 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" event={"ID":"97a55eb5-6536-4b57-ba38-39e6739d8188","Type":"ContainerStarted","Data":"5bc1496cdb0c248dd817be4fd554000a54368d11c4e14c5ec60571ccc7705b20"} Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.662210 4704 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.162192753 +0000 UTC m=+149.806739443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.663046 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" event={"ID":"40a01e3d-81aa-4444-93d8-c24228829b34","Type":"ContainerStarted","Data":"491b2a713968878880a44da5fd98c141b69a1352520cb3b6dacd33cb75b2af16"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.669081 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2np4w" event={"ID":"24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8","Type":"ContainerStarted","Data":"22190bc20c525667959a14dbcc066cdd7f5a2f90b4434b0894e98b90fffab5c9"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.669656 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2np4w" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.682075 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" event={"ID":"822794ef-a29d-43bb-8e01-ab9aa44ed0be","Type":"ContainerStarted","Data":"d423f5c5cfc6ffe7946dc262dacb7f73a62a14831b776b6924e05ee02b5e0dda"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.682122 4704 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" event={"ID":"822794ef-a29d-43bb-8e01-ab9aa44ed0be","Type":"ContainerStarted","Data":"5cf6e35abd06cf6dbad1e189def3e3de4bfc853775cc8ef433c16053f60e6553"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.683349 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" event={"ID":"64c8e38f-52cb-4101-b631-177fc6ed9086","Type":"ContainerStarted","Data":"a5a624309bbf685d0852bc18b03605e127ec309cc7e47b1f08fabdb0e50744f9"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.685984 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-92qrn" event={"ID":"f3395d1f-e400-4f01-87c2-7321f583d6d3","Type":"ContainerStarted","Data":"e6c1d045e546906677fc13dbdbe04ddd9465cf5df608b27ce5dfdd5952a0b4b4"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.686038 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-92qrn" event={"ID":"f3395d1f-e400-4f01-87c2-7321f583d6d3","Type":"ContainerStarted","Data":"57170ffe41d1bcf71599b65a2535f6916f7abed3f161702ada3d0e362f7d5f7a"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.686298 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.687976 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" event={"ID":"fa83c3a2-0e3f-4396-8693-69a92bf8a423","Type":"ContainerStarted","Data":"f45f1dfc1ad88ad95dd4e97409c94d4098f14c1c8ba3fdf3e59a4ba193243c73"} Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.711785 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mlsxz" podStartSLOduration=130.711765255 podStartE2EDuration="2m10.711765255s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:56.711625731 +0000 UTC m=+149.356172431" watchObservedRunningTime="2026-01-22 16:30:56.711765255 +0000 UTC m=+149.356311955" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.757088 4704 patch_prober.go:28] interesting pod/downloads-7954f5f757-2np4w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.757133 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2np4w" podUID="24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.759509 4704 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-97vvp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.759536 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" podUID="5816a839-8a48-4e39-ae5e-82df31d282df" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 22 16:30:56 crc 
kubenswrapper[4704]: I0122 16:30:56.763278 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.766760 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.266736657 +0000 UTC m=+149.911283367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.867740 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.870750 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 16:30:57.370735075 +0000 UTC m=+150.015281775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.928876 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:30:56 crc kubenswrapper[4704]: I0122 16:30:56.969321 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:56 crc kubenswrapper[4704]: E0122 16:30:56.969894 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.469878498 +0000 UTC m=+150.114425198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.072772 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.073773 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.573759364 +0000 UTC m=+150.218306064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.101048 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-92qrn" Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.137220 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-cpq2f" podStartSLOduration=131.137181546 podStartE2EDuration="2m11.137181546s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:57.133158791 +0000 UTC m=+149.777705491" watchObservedRunningTime="2026-01-22 16:30:57.137181546 +0000 UTC m=+149.781728246" Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.169932 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cswhh"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.175219 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.175472 4704 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.675439722 +0000 UTC m=+150.319986422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.180316 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.201384 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.280601 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.280921 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.780909689 +0000 UTC m=+150.425456389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.342649 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.374571 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kxvpl"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.392666 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.393173 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:57.893157763 +0000 UTC m=+150.537704463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.428832 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.442889 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.479180 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-whsbg"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.502128 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.502998 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.002980904 +0000 UTC m=+150.647527594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.531988 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.540137 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2qcrw"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.571084 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-r5dtt"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.611432 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.611691 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.111676245 +0000 UTC m=+150.756222945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: W0122 16:30:57.663133 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d0d5c5a_c1f7_4bc0_ad85_b4280f1f5fb0.slice/crio-f6356b9c6feeffadd216da5e2ba7120c4d5407f9ad9013b86ef46010e61539fb WatchSource:0}: Error finding container f6356b9c6feeffadd216da5e2ba7120c4d5407f9ad9013b86ef46010e61539fb: Status 404 returned error can't find the container with id f6356b9c6feeffadd216da5e2ba7120c4d5407f9ad9013b86ef46010e61539fb Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.677061 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lx7sw"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.712126 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.738058 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.738623 4704 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.238608281 +0000 UTC m=+150.883154981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.757363 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-khgwd" podStartSLOduration=131.757342199 podStartE2EDuration="2m11.757342199s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:57.739037732 +0000 UTC m=+150.383584442" watchObservedRunningTime="2026-01-22 16:30:57.757342199 +0000 UTC m=+150.401888899" Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.758925 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kbfs9"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.784664 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gllz9" event={"ID":"278370ba-36fe-40ff-8719-19b42b0357be","Type":"ContainerStarted","Data":"2328d3e5fe307babbf1ec07c51b6d462b763151e72015066b81eefc04758afa8"} Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.817062 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk"] Jan 22 16:30:57 crc 
kubenswrapper[4704]: I0122 16:30:57.817262 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" event={"ID":"48d333bd-5cb1-47e5-ad50-3d17246d36fe","Type":"ContainerStarted","Data":"c9f4eb7207a5cf796168a746a325c315162c25ec56064c2e07e015bb0285f86b"} Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.841628 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.841950 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.341936163 +0000 UTC m=+150.986482863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.854328 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.856246 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.861634 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m"] Jan 22 16:30:57 crc kubenswrapper[4704]: W0122 16:30:57.869321 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ee8df2_66e3_4de7_a2c3_c0687e535125.slice/crio-19bb09f266c90a8f0a0278efc36d4981e386594c0c776e144f1a1ec139a12d12 WatchSource:0}: Error finding container 19bb09f266c90a8f0a0278efc36d4981e386594c0c776e144f1a1ec139a12d12: Status 404 returned error can't find the container with id 19bb09f266c90a8f0a0278efc36d4981e386594c0c776e144f1a1ec139a12d12 Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.870652 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.879931 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh"] Jan 22 16:30:57 crc 
kubenswrapper[4704]: I0122 16:30:57.887725 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" event={"ID":"40a01e3d-81aa-4444-93d8-c24228829b34","Type":"ContainerStarted","Data":"fd97891300857b0cdb247fdf3bc807a36fc821d5abbbe5c2a7e1e440f09387e6"} Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.894633 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" event={"ID":"fa83c3a2-0e3f-4396-8693-69a92bf8a423","Type":"ContainerStarted","Data":"cadeab9d3fc1508a09777cd02936daba2b2104f478bf254a2d0a04bb20a6e82b"} Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.896665 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:30:57 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:30:57 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:30:57 crc kubenswrapper[4704]: healthz check failed Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.896735 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.925507 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fxzwk" podStartSLOduration=5.925481849 podStartE2EDuration="5.925481849s" podCreationTimestamp="2026-01-22 16:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:57.915692884 +0000 UTC m=+150.560239584" 
watchObservedRunningTime="2026-01-22 16:30:57.925481849 +0000 UTC m=+150.570028549" Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.935714 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.941758 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" event={"ID":"888365e6-5672-42f7-ba73-de140fe8ea0a","Type":"ContainerStarted","Data":"0733fa15dc7129ffbaf47abf8a1f369d1ec11721281c0fee51dc6a993c68f2ad"} Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.941878 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" event={"ID":"888365e6-5672-42f7-ba73-de140fe8ea0a","Type":"ContainerStarted","Data":"a1dcc0dadba05b0d76e689b0c7b2d6f7f069eeae74004a3fb08c18e1ec3b8d0d"} Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.943572 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:57 crc kubenswrapper[4704]: E0122 16:30:57.944049 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.444032742 +0000 UTC m=+151.088579442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.956878 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj"] Jan 22 16:30:57 crc kubenswrapper[4704]: I0122 16:30:57.958656 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" event={"ID":"4fc91464-d549-47b8-a428-605eaa51a21e","Type":"ContainerStarted","Data":"798dbb163543b587ad2f396579d2a07ca07d77eecbc815cafbd3dfbd218e5654"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.006391 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" event={"ID":"64c8e38f-52cb-4101-b631-177fc6ed9086","Type":"ContainerStarted","Data":"8f928c1f133633d74fb41862512d6b0dab4b3a6c04d71721dbbf1d8dc41dbff5"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.020548 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw" event={"ID":"70d54766-7f56-4fbc-acf2-0193dc9bf8c1","Type":"ContainerStarted","Data":"f01c6e3a6fbf46b05e81f2eb467cdcc1e2d200f1d735d5bbb4d1465dc2880fb4"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.022505 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" podStartSLOduration=132.022491526 
podStartE2EDuration="2m12.022491526s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.02151469 +0000 UTC m=+150.666061390" watchObservedRunningTime="2026-01-22 16:30:58.022491526 +0000 UTC m=+150.667038226" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.030372 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" event={"ID":"82f5fb5c-84e5-483b-9e21-5a7849856d41","Type":"ContainerStarted","Data":"fb89ffc649441ac8da6e66c8ee1022bc7177506db0041ff3139b0c2e2142d6d2"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.047532 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.048090 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.548059242 +0000 UTC m=+151.192605962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.062616 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" event={"ID":"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb","Type":"ContainerStarted","Data":"f89d5b985355e618dbd9a5d62e5b9fc68c47d0a76fea3596cbc0ee23063763f0"} Jan 22 16:30:58 crc kubenswrapper[4704]: W0122 16:30:58.081461 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65ebbe77_876f_45fd_8baf_2d375e7e1774.slice/crio-776ee7f1e0f7cfb426ca00e1e45e90c9ac19cd378104b3f141a413d181ad6391 WatchSource:0}: Error finding container 776ee7f1e0f7cfb426ca00e1e45e90c9ac19cd378104b3f141a413d181ad6391: Status 404 returned error can't find the container with id 776ee7f1e0f7cfb426ca00e1e45e90c9ac19cd378104b3f141a413d181ad6391 Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.081807 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whsbg" event={"ID":"624441d9-c4a5-4642-b5fb-07b54e9f40e0","Type":"ContainerStarted","Data":"9dc2e22b0e70163e10184afef95a265aecd9f67aa5980b2798ba5b59c6d71125"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.096504 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-92qrn" podStartSLOduration=132.096469002 podStartE2EDuration="2m12.096469002s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.043550644 +0000 UTC m=+150.688097344" watchObservedRunningTime="2026-01-22 16:30:58.096469002 +0000 UTC m=+150.741015722" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.104567 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz" event={"ID":"752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f","Type":"ContainerStarted","Data":"20560048b73f7c5c2293921520627abd5b4febea995e6c1c5d2a729a3db7f534"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.139625 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" event={"ID":"822794ef-a29d-43bb-8e01-ab9aa44ed0be","Type":"ContainerStarted","Data":"6dcc1c61e0933e3238f677c2bd500c0a085c8e2589a2c4d46b11b99591c77f55"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.157890 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.158833 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.658764685 +0000 UTC m=+151.303311385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.184144 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" event={"ID":"08014b73-1836-45da-a3fa-8a05ad57ebad","Type":"ContainerStarted","Data":"5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.184199 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" event={"ID":"08014b73-1836-45da-a3fa-8a05ad57ebad","Type":"ContainerStarted","Data":"915fc3f33b7c5d97f8f307690aeacc4012a6aafd775e308d238ec46b3dc456a3"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.185098 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.191217 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" event={"ID":"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a","Type":"ContainerStarted","Data":"63928bba5e38f46f2e0af2404848f9aa1ea395340ba3b3e6a836d707b3b74b86"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.191245 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" 
event={"ID":"a67cc5ad-43f8-4b5d-846c-981ca3b07e1a","Type":"ContainerStarted","Data":"dbc4a4706919ebe08fe6292315250eb372d19b8e65e3fa5e2cf7cda4426de63b"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.199356 4704 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lvsjg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.199396 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" podUID="08014b73-1836-45da-a3fa-8a05ad57ebad" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.228075 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cswhh" event={"ID":"509d0e75-5373-44d4-9053-14d595587d05","Type":"ContainerStarted","Data":"0c00458f9707419384d76acf77a8e29964dd9a9699530c9fde2e835d1c0ad92f"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.259147 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kxvpl" event={"ID":"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74","Type":"ContainerStarted","Data":"a33696a12f0c34666610d38a7d746b13e1eea9aa63df9ac0908be1f2d9374fb7"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.271300 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.272827 
4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.772811236 +0000 UTC m=+151.417357936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.300744 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" event={"ID":"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c","Type":"ContainerStarted","Data":"e9e3f6008c399c4a08e90032de075add3ac9bded9268a529b2241d62ab513795"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.345116 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" event={"ID":"a406634b-d850-4e1f-af04-f1ea77244ce1","Type":"ContainerStarted","Data":"7df608c431b01f6617e250c8322c703d27c6b59133ae476a30625ca19ac81213"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.345480 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" event={"ID":"a406634b-d850-4e1f-af04-f1ea77244ce1","Type":"ContainerStarted","Data":"382dad3b2bd8634b0a76fd17a34833a63107f64b24415b65c319888a08b9d93e"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.366847 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" podStartSLOduration=132.366820894 podStartE2EDuration="2m12.366820894s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.260037823 +0000 UTC m=+150.904584623" watchObservedRunningTime="2026-01-22 16:30:58.366820894 +0000 UTC m=+151.011367594" Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.395163 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:58.895140622 +0000 UTC m=+151.539687332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.397298 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.412939 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt" 
event={"ID":"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0","Type":"ContainerStarted","Data":"f6356b9c6feeffadd216da5e2ba7120c4d5407f9ad9013b86ef46010e61539fb"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.444595 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" podStartSLOduration=132.44457132 podStartE2EDuration="2m12.44457132s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.421616382 +0000 UTC m=+151.066163092" watchObservedRunningTime="2026-01-22 16:30:58.44457132 +0000 UTC m=+151.089118020" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.462870 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" podStartSLOduration=132.462855536 podStartE2EDuration="2m12.462855536s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.461255444 +0000 UTC m=+151.105802144" watchObservedRunningTime="2026-01-22 16:30:58.462855536 +0000 UTC m=+151.107402236" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.481165 4704 csr.go:261] certificate signing request csr-jl7r9 is approved, waiting to be issued Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.486642 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" event={"ID":"97a55eb5-6536-4b57-ba38-39e6739d8188","Type":"ContainerStarted","Data":"a0cb436ba756ce1823bfa857282610429d574ab768cfbce85e58a1969a3bb14c"} Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.490974 4704 patch_prober.go:28] interesting pod/downloads-7954f5f757-2np4w container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.491020 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2np4w" podUID="24582dbd-6a5a-4b85-947a-e7bd9bf3dfa8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.500314 4704 csr.go:257] certificate signing request csr-jl7r9 is issued Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.501367 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.502917 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.002893959 +0000 UTC m=+151.647440659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.509007 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2np4w" podStartSLOduration=132.508974247 podStartE2EDuration="2m12.508974247s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.506293387 +0000 UTC m=+151.150840097" watchObservedRunningTime="2026-01-22 16:30:58.508974247 +0000 UTC m=+151.153520967" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.528657 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.587898 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgdwt" podStartSLOduration=132.587651556 podStartE2EDuration="2m12.587651556s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.542170332 +0000 UTC m=+151.186717042" watchObservedRunningTime="2026-01-22 16:30:58.587651556 +0000 UTC m=+151.232198256" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.589690 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" podStartSLOduration=132.589681399 podStartE2EDuration="2m12.589681399s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.587063281 +0000 UTC m=+151.231609981" watchObservedRunningTime="2026-01-22 16:30:58.589681399 +0000 UTC m=+151.234228099" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.603379 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.604769 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.104754842 +0000 UTC m=+151.749301542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.621782 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7hfbg" podStartSLOduration=132.621761695 podStartE2EDuration="2m12.621761695s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.620913543 +0000 UTC m=+151.265460253" watchObservedRunningTime="2026-01-22 16:30:58.621761695 +0000 UTC m=+151.266308395" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.705018 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.705294 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.20528013 +0000 UTC m=+151.849826830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.745257 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cswhh" podStartSLOduration=132.745240951 podStartE2EDuration="2m12.745240951s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.723831874 +0000 UTC m=+151.368378574" watchObservedRunningTime="2026-01-22 16:30:58.745240951 +0000 UTC m=+151.389787651" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.757038 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2pkc8" podStartSLOduration=132.757014218 podStartE2EDuration="2m12.757014218s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.684489429 +0000 UTC m=+151.329036129" watchObservedRunningTime="2026-01-22 16:30:58.757014218 +0000 UTC m=+151.401560918" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.759877 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bsznl" podStartSLOduration=132.759868932 podStartE2EDuration="2m12.759868932s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.754996685 +0000 UTC m=+151.399543385" watchObservedRunningTime="2026-01-22 16:30:58.759868932 +0000 UTC m=+151.404415632" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.810555 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.810887 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.310876011 +0000 UTC m=+151.955422711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.840507 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" podStartSLOduration=132.840489942 podStartE2EDuration="2m12.840489942s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.793380825 +0000 UTC m=+151.437927535" watchObservedRunningTime="2026-01-22 16:30:58.840489942 +0000 UTC m=+151.485036632" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.840680 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" podStartSLOduration=132.840675197 podStartE2EDuration="2m12.840675197s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.838590483 +0000 UTC m=+151.483137203" watchObservedRunningTime="2026-01-22 16:30:58.840675197 +0000 UTC m=+151.485221897" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.886323 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gllz9" podStartSLOduration=132.886273435 podStartE2EDuration="2m12.886273435s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.885663809 +0000 UTC m=+151.530210529" watchObservedRunningTime="2026-01-22 16:30:58.886273435 +0000 UTC m=+151.530820135" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.895957 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:30:58 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:30:58 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:30:58 crc kubenswrapper[4704]: healthz check failed Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.895997 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.911090 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:58 crc kubenswrapper[4704]: E0122 16:30:58.911455 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.41144095 +0000 UTC m=+152.055987650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:58 crc kubenswrapper[4704]: I0122 16:30:58.942335 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" podStartSLOduration=58.942312934 podStartE2EDuration="58.942312934s" podCreationTimestamp="2026-01-22 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:58.931981075 +0000 UTC m=+151.576527795" watchObservedRunningTime="2026-01-22 16:30:58.942312934 +0000 UTC m=+151.586859634" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.018701 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.019318 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.51930077 +0000 UTC m=+152.163847480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.121080 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.121667 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.621648986 +0000 UTC m=+152.266195676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.223893 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.224301 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.724280189 +0000 UTC m=+152.368826889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.324873 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.325679 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.82565637 +0000 UTC m=+152.470203070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.426028 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.426604 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:30:59.926331572 +0000 UTC m=+152.570878272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.501534 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 16:25:58 +0000 UTC, rotation deadline is 2026-10-09 11:19:51.679203709 +0000 UTC Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.501572 4704 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6234h48m52.177634417s for next certificate rotation Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.505491 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" event={"ID":"cfdae196-c821-4f78-9191-890d25ca0e54","Type":"ContainerStarted","Data":"51cd1a11c08221a109a09ad0a5668e9e3c4adf0d361731157f3242b96de745a2"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.505542 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" event={"ID":"cfdae196-c821-4f78-9191-890d25ca0e54","Type":"ContainerStarted","Data":"9ced665da5829af5ef2980cc78ebac3679cb858ea4667dacc1e3c76dbceeca5b"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.511930 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" event={"ID":"a30726df-cfa8-4da0-9aa6-419437441379","Type":"ContainerStarted","Data":"0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.511975 4704 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" event={"ID":"a30726df-cfa8-4da0-9aa6-419437441379","Type":"ContainerStarted","Data":"74f4fc96cd3fb5ed9b46d3d7f546c8c660b51720ab45be891155e838ec3120a0"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.512224 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.513766 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" event={"ID":"48d333bd-5cb1-47e5-ad50-3d17246d36fe","Type":"ContainerStarted","Data":"777e42de437c7ab17b9dddfe2e5237116427ae54e08308a87a2d6dc139218f1b"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.517246 4704 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lx7sw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.517318 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.525712 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kbfs9" event={"ID":"07493bb4-1b2a-4770-8a0f-67ea302818c4","Type":"ContainerStarted","Data":"195a9de19f5d110ca3691a7f3f0c82b8991f2f53bbe707a7f2b38d46f3ea95c6"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.525769 4704 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kbfs9" event={"ID":"07493bb4-1b2a-4770-8a0f-67ea302818c4","Type":"ContainerStarted","Data":"432e128b42a751a502e25b8ac2f33fdaf16684f5a17af58906f72d81f68cd92c"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.526571 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.527094 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.027068906 +0000 UTC m=+152.671615606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.527567 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.528015 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.02799907 +0000 UTC m=+152.672545780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.531148 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" event={"ID":"64c8e38f-52cb-4101-b631-177fc6ed9086","Type":"ContainerStarted","Data":"7b17ca15ee43822503ad0abc5da1f10dd38fab591a70532cca647bc065d53967"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.534372 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" podStartSLOduration=133.534354266 podStartE2EDuration="2m13.534354266s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:59.534169481 +0000 UTC m=+152.178716181" watchObservedRunningTime="2026-01-22 16:30:59.534354266 +0000 UTC m=+152.178900966" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.544675 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw" event={"ID":"70d54766-7f56-4fbc-acf2-0193dc9bf8c1","Type":"ContainerStarted","Data":"f2a5d769575ad5bb1d19cf2a37bc2e3f34469bcb87865700c46c93d21b7de4cb"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.551896 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" 
event={"ID":"65ebbe77-876f-45fd-8baf-2d375e7e1774","Type":"ContainerStarted","Data":"776ee7f1e0f7cfb426ca00e1e45e90c9ac19cd378104b3f141a413d181ad6391"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.581510 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zkl2z" podStartSLOduration=133.581496843 podStartE2EDuration="2m13.581496843s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:59.581068542 +0000 UTC m=+152.225615242" watchObservedRunningTime="2026-01-22 16:30:59.581496843 +0000 UTC m=+152.226043543" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.583253 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" event={"ID":"fb8786f1-65c2-4086-9e36-b040560dcdd4","Type":"ContainerStarted","Data":"ba8fa09121fb0ffb1784972a4a2d37d3b947f4cfc55ab91b620e26783c75ec41"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.586863 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.587386 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.613304 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" event={"ID":"4c1fea53-1cb7-4d69-9f60-ffcf74ea35bb","Type":"ContainerStarted","Data":"74c1457f6b3ee73545472e8e74ae06414419188fad6e977bcf6357557fd14d38"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.614317 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.616990 4704 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8v4fz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]log ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]etcd ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/max-in-flight-filter ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 22 16:30:59 crc kubenswrapper[4704]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/project.openshift.io-projectcache ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/openshift.io-startinformers ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 22 16:30:59 crc kubenswrapper[4704]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 22 16:30:59 crc kubenswrapper[4704]: livez check failed Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.617054 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" podUID="97a55eb5-6536-4b57-ba38-39e6739d8188" containerName="openshift-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.628238 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.629239 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.129222817 +0000 UTC m=+152.773769507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.655134 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w8qrd" podStartSLOduration=133.655116101 podStartE2EDuration="2m13.655116101s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:59.610710684 +0000 UTC m=+152.255257384" watchObservedRunningTime="2026-01-22 16:30:59.655116101 +0000 UTC m=+152.299662801" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.691465 4704 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" event={"ID":"c2d48829-9085-45ca-bf9c-cc90d68a94a3","Type":"ContainerStarted","Data":"6a6d580f337b464713db09b18e6b7e7a82a2c725b59dd98ef90690ce85695088"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.691503 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pbnj7" event={"ID":"046b2cbe-50d4-4a8a-b8ba-3521b67c2f7c","Type":"ContainerStarted","Data":"22d64cb7d42d6a66464b9fbc5e9db4d1ef435ea44ca148e17dd66276608f5874"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.691545 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.720240 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" event={"ID":"b55af32b-969b-4bec-b0b4-49a1cacf5753","Type":"ContainerStarted","Data":"69921d211fac0227ee36bd9a7031c8e6c730ee5cad64d4cbd55465125437bf76"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.734269 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" event={"ID":"a406634b-d850-4e1f-af04-f1ea77244ce1","Type":"ContainerStarted","Data":"279fb82baa14608c323314896fe2fb4de451ded3a8baef4372c7075dd8234a69"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.736664 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz" event={"ID":"752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f","Type":"ContainerStarted","Data":"3addf943ffcda4d4b3e30dd247fe7937b9a9a060ea7ec98423275aecdfd898fa"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.736691 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz" event={"ID":"752dc3a8-6317-4ca6-9cfc-e7a3bf1c6e9f","Type":"ContainerStarted","Data":"984c27d1f41abe7c836dc494b7dc9032dbd0d4a189b1c47d9f49ae66d4c47139"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.737913 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.740016 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.240004352 +0000 UTC m=+152.884551052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.743855 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cswhh" event={"ID":"509d0e75-5373-44d4-9053-14d595587d05","Type":"ContainerStarted","Data":"df49317f4ffe13595025bb8edc4a8bf5886660b9b4547ad240e623023f4ba561"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.746404 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kxvpl" event={"ID":"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74","Type":"ContainerStarted","Data":"d99d114967ed546d485319b2eddd5c2b8c3c54119ccf28e8e3cfc603ad03038a"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.746939 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kxvpl" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.761753 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4dn9x" podStartSLOduration=133.761736318 podStartE2EDuration="2m13.761736318s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:59.657018331 +0000 UTC m=+152.301565021" watchObservedRunningTime="2026-01-22 16:30:59.761736318 +0000 UTC m=+152.406283018" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.783148 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-whsbg" event={"ID":"624441d9-c4a5-4642-b5fb-07b54e9f40e0","Type":"ContainerStarted","Data":"b21ec6139ae6b04fd592717cb81e7fba250fc0b739648bacbd6bf67b06f8ad91"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.786148 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.786321 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.804837 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.804880 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" event={"ID":"9128be3c-7611-4a51-b085-33b4019a0336","Type":"ContainerStarted","Data":"cc15e42ad2f0fd8e10acb0b069d001a4ef4ba6ce99004fcf2e750f7f51b85031"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.804899 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" event={"ID":"9128be3c-7611-4a51-b085-33b4019a0336","Type":"ContainerStarted","Data":"75180e102a2f44c1c27f59005ca2609b3afd675222b29c4c0b325b2b84da43ca"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.805486 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.806710 4704 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-24b8b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: 
connect: connection refused" start-of-body= Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.806754 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" podUID="9128be3c-7611-4a51-b085-33b4019a0336" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.819008 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6" event={"ID":"27ee8df2-66e3-4de7-a2c3-c0687e535125","Type":"ContainerStarted","Data":"5d82e0b1ef28746faf110a74da1e84ef9d0898b34a0b1c7ccb77d24ff375b367"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.819061 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6" event={"ID":"27ee8df2-66e3-4de7-a2c3-c0687e535125","Type":"ContainerStarted","Data":"19bb09f266c90a8f0a0278efc36d4981e386594c0c776e144f1a1ec139a12d12"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.827095 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" event={"ID":"caa82913-e147-40d4-b5d6-c162427bbf32","Type":"ContainerStarted","Data":"c64f4dd73aec64c3ea925fd20043e7d2eae1955ff82fae6385e8c1ca69f233f4"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.830461 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.839269 4704 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jksvk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 
10.217.0.23:8443: connect: connection refused" start-of-body= Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.839329 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" podUID="caa82913-e147-40d4-b5d6-c162427bbf32" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.839648 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.839979 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.339958726 +0000 UTC m=+152.984505426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.841918 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" event={"ID":"82f5fb5c-84e5-483b-9e21-5a7849856d41","Type":"ContainerStarted","Data":"e5b787c1cf8e1d3841d442878b4b39e08c4b1b8674ac07b4ed753c6cd4133c01"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.873978 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:30:59 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:30:59 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:30:59 crc kubenswrapper[4704]: healthz check failed Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.874030 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.882453 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" 
event={"ID":"4fc91464-d549-47b8-a428-605eaa51a21e","Type":"ContainerStarted","Data":"6511e232b13baa5a1a69cf8f8a05753b29c003d5f287fd283be471ecfaaeb16e"} Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.888205 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.894903 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tbg6j" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.914166 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57dhz" podStartSLOduration=133.914148637 podStartE2EDuration="2m13.914148637s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:59.840188162 +0000 UTC m=+152.484734852" watchObservedRunningTime="2026-01-22 16:30:59.914148637 +0000 UTC m=+152.558695337" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.941521 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:30:59 crc kubenswrapper[4704]: E0122 16:30:59.944192 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.444176739 +0000 UTC m=+153.088723439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.988445 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" podStartSLOduration=133.988425832 podStartE2EDuration="2m13.988425832s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:59.981768558 +0000 UTC m=+152.626315258" watchObservedRunningTime="2026-01-22 16:30:59.988425832 +0000 UTC m=+152.632972532" Jan 22 16:30:59 crc kubenswrapper[4704]: I0122 16:30:59.989058 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bk6h2" podStartSLOduration=133.989051728 podStartE2EDuration="2m13.989051728s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:30:59.939616501 +0000 UTC m=+152.584163201" watchObservedRunningTime="2026-01-22 16:30:59.989051728 +0000 UTC m=+152.633598428" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.042428 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.042729 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.542710286 +0000 UTC m=+153.187256986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.043184 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.072283 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.572263276 +0000 UTC m=+153.216809966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.077774 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kxvpl" podStartSLOduration=8.077752469 podStartE2EDuration="8.077752469s" podCreationTimestamp="2026-01-22 16:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:00.07358344 +0000 UTC m=+152.718130140" watchObservedRunningTime="2026-01-22 16:31:00.077752469 +0000 UTC m=+152.722299189" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.146516 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.146611 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.646594282 +0000 UTC m=+153.291140982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.146906 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.147161 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.647154756 +0000 UTC m=+153.291701456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.172199 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" podStartSLOduration=134.172184818 podStartE2EDuration="2m14.172184818s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:00.141787287 +0000 UTC m=+152.786333997" watchObservedRunningTime="2026-01-22 16:31:00.172184818 +0000 UTC m=+152.816731518" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.211806 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4p2x6" podStartSLOduration=134.21177799 podStartE2EDuration="2m14.21177799s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:00.173286647 +0000 UTC m=+152.817833347" watchObservedRunningTime="2026-01-22 16:31:00.21177799 +0000 UTC m=+152.856324690" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.249349 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.249618 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.749598145 +0000 UTC m=+153.394144845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.351483 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.351876 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.851862898 +0000 UTC m=+153.496409598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.385316 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" podStartSLOduration=134.385300239 podStartE2EDuration="2m14.385300239s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:00.287198924 +0000 UTC m=+152.931745624" watchObservedRunningTime="2026-01-22 16:31:00.385300239 +0000 UTC m=+153.029846939" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.385849 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w79nv" podStartSLOduration=134.385843373 podStartE2EDuration="2m14.385843373s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:00.38265367 +0000 UTC m=+153.027200390" watchObservedRunningTime="2026-01-22 16:31:00.385843373 +0000 UTC m=+153.030390073" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.452697 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.452877 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.952847999 +0000 UTC m=+153.597394699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.452939 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.453233 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:00.953221568 +0000 UTC m=+153.597768269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.480611 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rdsv" podStartSLOduration=134.480579201 podStartE2EDuration="2m14.480579201s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:00.441781881 +0000 UTC m=+153.086328581" watchObservedRunningTime="2026-01-22 16:31:00.480579201 +0000 UTC m=+153.125125901" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.554722 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.554944 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.054912907 +0000 UTC m=+153.699459607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.554984 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.555303 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.055289577 +0000 UTC m=+153.699836277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.621825 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kzftk" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.656435 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.656615 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.156585956 +0000 UTC m=+153.801132656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.656895 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.657263 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.157248673 +0000 UTC m=+153.801795383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.757820 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.758575 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.258549621 +0000 UTC m=+153.903096321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.859084 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.859479 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.35945807 +0000 UTC m=+154.004004830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.871373 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:31:00 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:31:00 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:31:00 crc kubenswrapper[4704]: healthz check failed Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.871437 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.894914 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" event={"ID":"65ebbe77-876f-45fd-8baf-2d375e7e1774","Type":"ContainerStarted","Data":"fdadc448ff786168bb38f9d291ca88acdaf8c925abd0d11c806391c6cdd37c14"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.894965 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" event={"ID":"65ebbe77-876f-45fd-8baf-2d375e7e1774","Type":"ContainerStarted","Data":"606fc51d417cf65ac686fb6d2a01e93bce065d4ffae0af9af16b77bbd3333e6b"} Jan 22 16:31:00 crc 
kubenswrapper[4704]: I0122 16:31:00.897084 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" event={"ID":"fb8786f1-65c2-4086-9e36-b040560dcdd4","Type":"ContainerStarted","Data":"038822ca20dcad1c11b6e132239104fdceb0b2ce9f29e81f7ca6c880c948c581"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.898620 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" event={"ID":"c2d48829-9085-45ca-bf9c-cc90d68a94a3","Type":"ContainerStarted","Data":"4b6955639df188217f191f58cba88d72c0cf5c22403d88f1bf86b49c394311e0"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.898651 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" event={"ID":"c2d48829-9085-45ca-bf9c-cc90d68a94a3","Type":"ContainerStarted","Data":"78c9a49a81fa9726964f5f2447e4a146f42b44e868ba8a22974435863b337540"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.899108 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.900949 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" event={"ID":"cfdae196-c821-4f78-9191-890d25ca0e54","Type":"ContainerStarted","Data":"6b23a178e95e5e58e7a17e8c21ccb0cd940035168cdab121ef215d443af159b9"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.902868 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kxvpl" event={"ID":"9a6ce7e3-b982-4217-a49b-a0ce7e6a9f74","Type":"ContainerStarted","Data":"53e025a88961006589442dfeeb35d3671ce08708df3259ae48a34bdca20be710"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.904505 4704 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f97bj" event={"ID":"b55af32b-969b-4bec-b0b4-49a1cacf5753","Type":"ContainerStarted","Data":"d350db3266c41e0d51da4eaefab3ce141ef12aaa1e6d3e65dba8e7104c366bca"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.906952 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt" event={"ID":"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0","Type":"ContainerStarted","Data":"09025b084ad6fddb88c7a55490ac068eed21d2d9542612e260792d35da66fd03"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.906988 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt" event={"ID":"8d0d5c5a-c1f7-4bc0-ad85-b4280f1f5fb0","Type":"ContainerStarted","Data":"047a492356c465dc501483876c5a6f9f859a75e5fc2083b21168bf4a88ada6dc"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.908647 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw" event={"ID":"70d54766-7f56-4fbc-acf2-0193dc9bf8c1","Type":"ContainerStarted","Data":"a2a420f764f33aca37357bb9120719d6568f2b1be4e456c3ff230bc62469a4a4"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.910676 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" event={"ID":"caa82913-e147-40d4-b5d6-c162427bbf32","Type":"ContainerStarted","Data":"94b63833d0f4fa96f1df5b51d1c66fe52d92174002fb0128b65a4844d346c0f3"} Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.916921 4704 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lx7sw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 22 
16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.916973 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.956979 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-glvzp" podStartSLOduration=134.956957109 podStartE2EDuration="2m14.956957109s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:00.953980532 +0000 UTC m=+153.598527242" watchObservedRunningTime="2026-01-22 16:31:00.956957109 +0000 UTC m=+153.601503809" Jan 22 16:31:00 crc kubenswrapper[4704]: I0122 16:31:00.959823 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:00 crc kubenswrapper[4704]: E0122 16:31:00.960113 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.460098801 +0000 UTC m=+154.104645501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.005635 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-r5dtt" podStartSLOduration=135.005617647 podStartE2EDuration="2m15.005617647s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:01.002918207 +0000 UTC m=+153.647464907" watchObservedRunningTime="2026-01-22 16:31:01.005617647 +0000 UTC m=+153.650164347" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.016678 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jksvk" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.041763 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6kjm" podStartSLOduration=135.041745218 podStartE2EDuration="2m15.041745218s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:01.041131852 +0000 UTC m=+153.685678572" watchObservedRunningTime="2026-01-22 16:31:01.041745218 +0000 UTC m=+153.686291918" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.064813 4704 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.068741 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.568723971 +0000 UTC m=+154.213270671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.081530 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q6ffh" podStartSLOduration=135.081507814 podStartE2EDuration="2m15.081507814s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:01.074158222 +0000 UTC m=+153.718704922" watchObservedRunningTime="2026-01-22 16:31:01.081507814 +0000 UTC m=+153.726054514" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.105360 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2qcrw" podStartSLOduration=135.105340854 
podStartE2EDuration="2m15.105340854s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:01.101242988 +0000 UTC m=+153.745789688" watchObservedRunningTime="2026-01-22 16:31:01.105340854 +0000 UTC m=+153.749887554" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.142713 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kbfs9" podStartSLOduration=9.142692817 podStartE2EDuration="9.142692817s" podCreationTimestamp="2026-01-22 16:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:01.139387071 +0000 UTC m=+153.783933771" watchObservedRunningTime="2026-01-22 16:31:01.142692817 +0000 UTC m=+153.787239517" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.165890 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.166198 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.666184109 +0000 UTC m=+154.310730809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.191639 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" podStartSLOduration=135.191623942 podStartE2EDuration="2m15.191623942s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:01.188210373 +0000 UTC m=+153.832757073" watchObservedRunningTime="2026-01-22 16:31:01.191623942 +0000 UTC m=+153.836170642" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.268981 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.269881 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.76985759 +0000 UTC m=+154.414404290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.371113 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.371379 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.871339733 +0000 UTC m=+154.515886443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.372146 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.372569 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.872557675 +0000 UTC m=+154.517104545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.473524 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.473749 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.97371959 +0000 UTC m=+154.618266300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.473867 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.474127 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:01.97411505 +0000 UTC m=+154.618661750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.575015 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.575468 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.07545079 +0000 UTC m=+154.719997490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.676287 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.676760 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.176742638 +0000 UTC m=+154.821289338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.777260 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.777892 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.277873342 +0000 UTC m=+154.922420042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.869673 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:31:01 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:31:01 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:31:01 crc kubenswrapper[4704]: healthz check failed Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.869740 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.879064 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.879433 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 16:31:02.379417877 +0000 UTC m=+155.023964577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.912207 4704 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-24b8b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.912266 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" podUID="9128be3c-7611-4a51-b085-33b4019a0336" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.918557 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whsbg" event={"ID":"624441d9-c4a5-4642-b5fb-07b54e9f40e0","Type":"ContainerStarted","Data":"99dc6a355da60540b4443f610183eddf7e081ded4b6668f635f2b56ad038a741"} Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.980507 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.980651 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.480629683 +0000 UTC m=+155.125176383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:01 crc kubenswrapper[4704]: I0122 16:31:01.980846 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:01 crc kubenswrapper[4704]: E0122 16:31:01.981152 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.481141727 +0000 UTC m=+155.125688427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.065393 4704 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.082261 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.082477 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.582444045 +0000 UTC m=+155.226990745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.084671 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.085416 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.585400512 +0000 UTC m=+155.229947212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.186269 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.186677 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.68665375 +0000 UTC m=+155.331200450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.187022 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.187440 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.68743246 +0000 UTC m=+155.331979160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.289480 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.289941 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.789915539 +0000 UTC m=+155.434462239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.290366 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.290859 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.790851564 +0000 UTC m=+155.435398264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.394496 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.395053 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.895031457 +0000 UTC m=+155.539578157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.496377 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.496696 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:02.996684725 +0000 UTC m=+155.641231425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.597459 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.598288 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:03.098266091 +0000 UTC m=+155.742812801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.699809 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.700113 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:03.200095334 +0000 UTC m=+155.844642034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.763153 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4kgkm"] Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.764404 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.766073 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.778784 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4kgkm"] Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.839768 4704 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T16:31:02.065428402Z","Handler":null,"Name":""} Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.841437 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.841576 4704 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 16:31:03.341553388 +0000 UTC m=+155.986100098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.841720 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-catalog-content\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.841829 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-utilities\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.841889 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6wd\" (UniqueName: \"kubernetes.io/projected/16980b70-91da-419b-b855-6a2551f62423-kube-api-access-lt6wd\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc 
kubenswrapper[4704]: I0122 16:31:02.841948 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:02 crc kubenswrapper[4704]: E0122 16:31:02.842305 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 16:31:03.342285757 +0000 UTC m=+155.986832467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xvsbg" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.867578 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:31:02 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:31:02 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:31:02 crc kubenswrapper[4704]: healthz check failed Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.867639 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.883573 4704 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.883614 4704 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.927116 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whsbg" event={"ID":"624441d9-c4a5-4642-b5fb-07b54e9f40e0","Type":"ContainerStarted","Data":"91ad154e990c03f3e69705b374c8e78905385a54c75d3a2fe2a8deb8183e67f8"} Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.927166 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whsbg" event={"ID":"624441d9-c4a5-4642-b5fb-07b54e9f40e0","Type":"ContainerStarted","Data":"283d61f55320914a14e4fa791701fef1fe1331c4946f6da6655f82905bbb7a19"} Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.943177 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.943411 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6wd\" (UniqueName: \"kubernetes.io/projected/16980b70-91da-419b-b855-6a2551f62423-kube-api-access-lt6wd\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " 
pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.943531 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-catalog-content\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.943617 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-utilities\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.944102 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-utilities\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.944271 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-catalog-content\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.964375 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8qlsl"] Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.973183 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-whsbg" podStartSLOduration=10.973159366 
podStartE2EDuration="10.973159366s" podCreationTimestamp="2026-01-22 16:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:02.970588049 +0000 UTC m=+155.615134769" watchObservedRunningTime="2026-01-22 16:31:02.973159366 +0000 UTC m=+155.617706066" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.976061 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.982149 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 16:31:02 crc kubenswrapper[4704]: I0122 16:31:02.989895 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qlsl"] Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.009740 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6wd\" (UniqueName: \"kubernetes.io/projected/16980b70-91da-419b-b855-6a2551f62423-kube-api-access-lt6wd\") pod \"community-operators-4kgkm\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.024686 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.043943 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-catalog-content\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.044007 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk6gd\" (UniqueName: \"kubernetes.io/projected/798305b7-a0da-49f9-904a-265e215f1fea-kube-api-access-wk6gd\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.044051 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.044112 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-utilities\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.051132 4704 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.051166 4704 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.076518 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xvsbg\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.080249 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.150896 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-utilities\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.151094 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-catalog-content\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.151127 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk6gd\" (UniqueName: \"kubernetes.io/projected/798305b7-a0da-49f9-904a-265e215f1fea-kube-api-access-wk6gd\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.151855 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-utilities\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.152159 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-catalog-content\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " 
pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.166133 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ws5kw"] Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.167402 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.168511 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk6gd\" (UniqueName: \"kubernetes.io/projected/798305b7-a0da-49f9-904a-265e215f1fea-kube-api-access-wk6gd\") pod \"certified-operators-8qlsl\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.174098 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ws5kw"] Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.252216 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-catalog-content\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.252276 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-utilities\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.252375 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpx9n\" 
(UniqueName: \"kubernetes.io/projected/bd467440-1ed0-4085-b8d6-e4245de4ffda-kube-api-access-jpx9n\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.272839 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4kgkm"] Jan 22 16:31:03 crc kubenswrapper[4704]: W0122 16:31:03.278058 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16980b70_91da_419b_b855_6a2551f62423.slice/crio-5239e85c73fb66224a9000fc0f8c7e1537fd2d565fd1878c0112b89139367b80 WatchSource:0}: Error finding container 5239e85c73fb66224a9000fc0f8c7e1537fd2d565fd1878c0112b89139367b80: Status 404 returned error can't find the container with id 5239e85c73fb66224a9000fc0f8c7e1537fd2d565fd1878c0112b89139367b80 Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.294312 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.310266 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.352962 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpx9n\" (UniqueName: \"kubernetes.io/projected/bd467440-1ed0-4085-b8d6-e4245de4ffda-kube-api-access-jpx9n\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.353044 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-catalog-content\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.353074 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-utilities\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.353726 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-catalog-content\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.353778 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-utilities\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " 
pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.357518 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5fhp"] Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.358942 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.369490 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5fhp"] Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.396065 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpx9n\" (UniqueName: \"kubernetes.io/projected/bd467440-1ed0-4085-b8d6-e4245de4ffda-kube-api-access-jpx9n\") pod \"community-operators-ws5kw\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.518416 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.543961 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qlsl"] Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.556299 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzjm5\" (UniqueName: \"kubernetes.io/projected/e85b5045-b0f3-49cd-97e4-a4c0688313e1-kube-api-access-nzjm5\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.556505 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-catalog-content\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.556576 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-utilities\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.578530 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xvsbg"] Jan 22 16:31:03 crc kubenswrapper[4704]: W0122 16:31:03.586291 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ded330b_1278_4aea_8eb7_711847e9a54e.slice/crio-509daafecc6b549e74fa1d923aeb1e8a97e389defa18136b2d46a7ddaa49b4e7 
WatchSource:0}: Error finding container 509daafecc6b549e74fa1d923aeb1e8a97e389defa18136b2d46a7ddaa49b4e7: Status 404 returned error can't find the container with id 509daafecc6b549e74fa1d923aeb1e8a97e389defa18136b2d46a7ddaa49b4e7 Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.661293 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzjm5\" (UniqueName: \"kubernetes.io/projected/e85b5045-b0f3-49cd-97e4-a4c0688313e1-kube-api-access-nzjm5\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.661356 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-catalog-content\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.661419 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-utilities\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.662094 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-utilities\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.662839 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-catalog-content\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.667486 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.696484 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzjm5\" (UniqueName: \"kubernetes.io/projected/e85b5045-b0f3-49cd-97e4-a4c0688313e1-kube-api-access-nzjm5\") pod \"certified-operators-q5fhp\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.772043 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ws5kw"] Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.871739 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:31:03 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:31:03 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:31:03 crc kubenswrapper[4704]: healthz check failed Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.872027 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.933647 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" event={"ID":"6ded330b-1278-4aea-8eb7-711847e9a54e","Type":"ContainerStarted","Data":"a573e292f139e90dedf58db572cd3d04d932569566aace4c266733f4d8c9214f"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.933704 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" event={"ID":"6ded330b-1278-4aea-8eb7-711847e9a54e","Type":"ContainerStarted","Data":"509daafecc6b549e74fa1d923aeb1e8a97e389defa18136b2d46a7ddaa49b4e7"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.933769 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.935001 4704 generic.go:334] "Generic (PLEG): container finished" podID="888365e6-5672-42f7-ba73-de140fe8ea0a" containerID="0733fa15dc7129ffbaf47abf8a1f369d1ec11721281c0fee51dc6a993c68f2ad" exitCode=0 Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.935071 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" event={"ID":"888365e6-5672-42f7-ba73-de140fe8ea0a","Type":"ContainerDied","Data":"0733fa15dc7129ffbaf47abf8a1f369d1ec11721281c0fee51dc6a993c68f2ad"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.935872 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ws5kw" event={"ID":"bd467440-1ed0-4085-b8d6-e4245de4ffda","Type":"ContainerStarted","Data":"fcd207a9ea6a6a5c509f6f407b00c6101834593217b6cc69a611d165fa3bc011"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.937325 4704 generic.go:334] "Generic (PLEG): container finished" podID="798305b7-a0da-49f9-904a-265e215f1fea" containerID="89756bb79d0c08e07305dab603a0c4f5129c0878286271e2bbd77bae9f5ad541" exitCode=0 Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.937348 4704 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlsl" event={"ID":"798305b7-a0da-49f9-904a-265e215f1fea","Type":"ContainerDied","Data":"89756bb79d0c08e07305dab603a0c4f5129c0878286271e2bbd77bae9f5ad541"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.937370 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlsl" event={"ID":"798305b7-a0da-49f9-904a-265e215f1fea","Type":"ContainerStarted","Data":"0f4826ebcbfb58a5bc84bb2987df1347967d07a03345bf435a95fa3374c9408f"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.938541 4704 generic.go:334] "Generic (PLEG): container finished" podID="16980b70-91da-419b-b855-6a2551f62423" containerID="bfb1d03b6f4171a4efa04ae01fe1a3253c631249b4d6b91fe8f4d3a612e5405a" exitCode=0 Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.938634 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kgkm" event={"ID":"16980b70-91da-419b-b855-6a2551f62423","Type":"ContainerDied","Data":"bfb1d03b6f4171a4efa04ae01fe1a3253c631249b4d6b91fe8f4d3a612e5405a"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.938681 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kgkm" event={"ID":"16980b70-91da-419b-b855-6a2551f62423","Type":"ContainerStarted","Data":"5239e85c73fb66224a9000fc0f8c7e1537fd2d565fd1878c0112b89139367b80"} Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.938969 4704 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.973974 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:03 crc kubenswrapper[4704]: I0122 16:31:03.974258 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" podStartSLOduration=137.974242691 podStartE2EDuration="2m17.974242691s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:03.952191986 +0000 UTC m=+156.596738686" watchObservedRunningTime="2026-01-22 16:31:03.974242691 +0000 UTC m=+156.618789391" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.235982 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5fhp"] Jan 22 16:31:04 crc kubenswrapper[4704]: W0122 16:31:04.244887 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85b5045_b0f3_49cd_97e4_a4c0688313e1.slice/crio-2109739d789d0ebde6df1419357c227bd552655969fbc6725ce6e7a9858190a8 WatchSource:0}: Error finding container 2109739d789d0ebde6df1419357c227bd552655969fbc6725ce6e7a9858190a8: Status 404 returned error can't find the container with id 2109739d789d0ebde6df1419357c227bd552655969fbc6725ce6e7a9858190a8 Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.539334 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.539827 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.541560 4704 patch_prober.go:28] interesting pod/console-f9d7485db-khgwd container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.541619 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-khgwd" podUID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.591113 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.595714 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8v4fz" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.869198 4704 patch_prober.go:28] interesting pod/router-default-5444994796-gllz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 16:31:04 crc kubenswrapper[4704]: [-]has-synced failed: reason withheld Jan 22 16:31:04 crc kubenswrapper[4704]: [+]process-running ok Jan 22 16:31:04 crc kubenswrapper[4704]: healthz check failed Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.869468 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gllz9" podUID="278370ba-36fe-40ff-8719-19b42b0357be" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.912625 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2np4w" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.961942 4704 generic.go:334] "Generic (PLEG): container finished" 
podID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerID="f9411ae2c0e392f15822fb9bdf009a6840a4fccc03effaeaa1f27ad6a13b4e00" exitCode=0 Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.962023 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ws5kw" event={"ID":"bd467440-1ed0-4085-b8d6-e4245de4ffda","Type":"ContainerDied","Data":"f9411ae2c0e392f15822fb9bdf009a6840a4fccc03effaeaa1f27ad6a13b4e00"} Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.963995 4704 generic.go:334] "Generic (PLEG): container finished" podID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerID="33151e6b81a000da898ad64aab691219da4d84bb90d43832928459ffc89410b3" exitCode=0 Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.964527 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5fhp" event={"ID":"e85b5045-b0f3-49cd-97e4-a4c0688313e1","Type":"ContainerDied","Data":"33151e6b81a000da898ad64aab691219da4d84bb90d43832928459ffc89410b3"} Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.964557 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5fhp" event={"ID":"e85b5045-b0f3-49cd-97e4-a4c0688313e1","Type":"ContainerStarted","Data":"2109739d789d0ebde6df1419357c227bd552655969fbc6725ce6e7a9858190a8"} Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.977441 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrdrd"] Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.978618 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.980076 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 16:31:04 crc kubenswrapper[4704]: I0122 16:31:04.992003 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrdrd"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.080784 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.083232 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.083473 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsfpr\" (UniqueName: \"kubernetes.io/projected/137b8d6b-e852-4f81-992d-b5cc4b5ed519-kube-api-access-jsfpr\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.083531 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-utilities\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.083676 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-catalog-content\") pod \"redhat-marketplace-vrdrd\" (UID: 
\"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.085436 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.086515 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.086895 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.186658 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.186713 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsfpr\" (UniqueName: \"kubernetes.io/projected/137b8d6b-e852-4f81-992d-b5cc4b5ed519-kube-api-access-jsfpr\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.186738 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-utilities\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.186781 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.186836 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-catalog-content\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.187238 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-catalog-content\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.187435 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-utilities\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.230608 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsfpr\" (UniqueName: \"kubernetes.io/projected/137b8d6b-e852-4f81-992d-b5cc4b5ed519-kube-api-access-jsfpr\") pod \"redhat-marketplace-vrdrd\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.288879 4704 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.288977 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.289062 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.294072 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.307530 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.314863 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.356814 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8vsbg"] Jan 22 16:31:05 crc kubenswrapper[4704]: E0122 16:31:05.357039 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888365e6-5672-42f7-ba73-de140fe8ea0a" containerName="collect-profiles" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.357050 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="888365e6-5672-42f7-ba73-de140fe8ea0a" containerName="collect-profiles" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.357142 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="888365e6-5672-42f7-ba73-de140fe8ea0a" containerName="collect-profiles" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.357924 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.374034 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vsbg"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.391556 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh8tr\" (UniqueName: \"kubernetes.io/projected/888365e6-5672-42f7-ba73-de140fe8ea0a-kube-api-access-dh8tr\") pod \"888365e6-5672-42f7-ba73-de140fe8ea0a\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.391698 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/888365e6-5672-42f7-ba73-de140fe8ea0a-secret-volume\") pod \"888365e6-5672-42f7-ba73-de140fe8ea0a\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.391757 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/888365e6-5672-42f7-ba73-de140fe8ea0a-config-volume\") pod \"888365e6-5672-42f7-ba73-de140fe8ea0a\" (UID: \"888365e6-5672-42f7-ba73-de140fe8ea0a\") " Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.395520 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888365e6-5672-42f7-ba73-de140fe8ea0a-config-volume" (OuterVolumeSpecName: "config-volume") pod "888365e6-5672-42f7-ba73-de140fe8ea0a" (UID: "888365e6-5672-42f7-ba73-de140fe8ea0a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.400274 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888365e6-5672-42f7-ba73-de140fe8ea0a-kube-api-access-dh8tr" (OuterVolumeSpecName: "kube-api-access-dh8tr") pod "888365e6-5672-42f7-ba73-de140fe8ea0a" (UID: "888365e6-5672-42f7-ba73-de140fe8ea0a"). InnerVolumeSpecName "kube-api-access-dh8tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.406577 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888365e6-5672-42f7-ba73-de140fe8ea0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "888365e6-5672-42f7-ba73-de140fe8ea0a" (UID: "888365e6-5672-42f7-ba73-de140fe8ea0a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.409407 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.493877 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jct6\" (UniqueName: \"kubernetes.io/projected/97a2a078-75ba-4e1b-b477-4c076b1be529-kube-api-access-5jct6\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.493939 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-catalog-content\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.493957 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-utilities\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.494065 4704 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/888365e6-5672-42f7-ba73-de140fe8ea0a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.494086 4704 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/888365e6-5672-42f7-ba73-de140fe8ea0a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.494096 4704 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dh8tr\" (UniqueName: \"kubernetes.io/projected/888365e6-5672-42f7-ba73-de140fe8ea0a-kube-api-access-dh8tr\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.524026 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrdrd"] Jan 22 16:31:05 crc kubenswrapper[4704]: W0122 16:31:05.533539 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137b8d6b_e852_4f81_992d_b5cc4b5ed519.slice/crio-67599c03fb2fc56807ef58390af90aa0b59ed4e87c97331f84b4678435e85ee5 WatchSource:0}: Error finding container 67599c03fb2fc56807ef58390af90aa0b59ed4e87c97331f84b4678435e85ee5: Status 404 returned error can't find the container with id 67599c03fb2fc56807ef58390af90aa0b59ed4e87c97331f84b4678435e85ee5 Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.596978 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-catalog-content\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.597041 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-utilities\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.597211 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jct6\" (UniqueName: \"kubernetes.io/projected/97a2a078-75ba-4e1b-b477-4c076b1be529-kube-api-access-5jct6\") pod \"redhat-marketplace-8vsbg\" (UID: 
\"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.597747 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-utilities\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.600296 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-catalog-content\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.606216 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.644671 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jct6\" (UniqueName: \"kubernetes.io/projected/97a2a078-75ba-4e1b-b477-4c076b1be529-kube-api-access-5jct6\") pod \"redhat-marketplace-8vsbg\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.677501 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.730244 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.866950 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.869840 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.923933 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vsbg"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.933596 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-24b8b" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.961329 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-57zfj"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.963645 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.970342 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.988487 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57zfj"] Jan 22 16:31:05 crc kubenswrapper[4704]: I0122 16:31:05.989388 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2","Type":"ContainerStarted","Data":"8c7c2824a5272cfcfdbf3603487b9af53811b81e06b19c4b803ee1f93739501b"} Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.007904 4704 generic.go:334] "Generic (PLEG): container finished" podID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerID="12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0" exitCode=0 Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.008052 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrdrd" event={"ID":"137b8d6b-e852-4f81-992d-b5cc4b5ed519","Type":"ContainerDied","Data":"12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0"} Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.008084 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrdrd" event={"ID":"137b8d6b-e852-4f81-992d-b5cc4b5ed519","Type":"ContainerStarted","Data":"67599c03fb2fc56807ef58390af90aa0b59ed4e87c97331f84b4678435e85ee5"} Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.011612 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.011903 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484990-9qpp5" event={"ID":"888365e6-5672-42f7-ba73-de140fe8ea0a","Type":"ContainerDied","Data":"a1dcc0dadba05b0d76e689b0c7b2d6f7f069eeae74004a3fb08c18e1ec3b8d0d"} Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.012055 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1dcc0dadba05b0d76e689b0c7b2d6f7f069eeae74004a3fb08c18e1ec3b8d0d" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.013899 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vsbg" event={"ID":"97a2a078-75ba-4e1b-b477-4c076b1be529","Type":"ContainerStarted","Data":"5380147a69bc400d3db611ba3e8cb9e730c18f4e78655de4c4883c0b9f52069c"} Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.016817 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gllz9" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.104116 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xlk4\" (UniqueName: \"kubernetes.io/projected/d39c37f0-3471-4222-b3f0-b9947d334ef5-kube-api-access-2xlk4\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.104204 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-utilities\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 
22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.104296 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-catalog-content\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.205712 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xlk4\" (UniqueName: \"kubernetes.io/projected/d39c37f0-3471-4222-b3f0-b9947d334ef5-kube-api-access-2xlk4\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.205777 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-utilities\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.205940 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-catalog-content\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.206545 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-utilities\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 
16:31:06.206656 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-catalog-content\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.238356 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xlk4\" (UniqueName: \"kubernetes.io/projected/d39c37f0-3471-4222-b3f0-b9947d334ef5-kube-api-access-2xlk4\") pod \"redhat-operators-57zfj\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.290205 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.359813 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m44hn"] Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.361140 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.381193 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m44hn"] Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.518676 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-catalog-content\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.518872 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvd5\" (UniqueName: \"kubernetes.io/projected/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-kube-api-access-7vvd5\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.518913 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-utilities\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.625914 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-utilities\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.626054 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-catalog-content\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.627930 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-utilities\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.628011 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-catalog-content\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.628107 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvd5\" (UniqueName: \"kubernetes.io/projected/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-kube-api-access-7vvd5\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.630672 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57zfj"] Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.653443 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvd5\" (UniqueName: \"kubernetes.io/projected/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-kube-api-access-7vvd5\") pod \"redhat-operators-m44hn\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:06 crc 
kubenswrapper[4704]: W0122 16:31:06.658199 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd39c37f0_3471_4222_b3f0_b9947d334ef5.slice/crio-6235916ba9149102a0631dae5384bae0d09d5a9dbbe8bce24953b202969f0889 WatchSource:0}: Error finding container 6235916ba9149102a0631dae5384bae0d09d5a9dbbe8bce24953b202969f0889: Status 404 returned error can't find the container with id 6235916ba9149102a0631dae5384bae0d09d5a9dbbe8bce24953b202969f0889 Jan 22 16:31:06 crc kubenswrapper[4704]: I0122 16:31:06.703966 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:07 crc kubenswrapper[4704]: I0122 16:31:07.005274 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m44hn"] Jan 22 16:31:07 crc kubenswrapper[4704]: W0122 16:31:07.017122 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09954e8e_8b14_4f6c_88b9_75cb8fac0f4c.slice/crio-e297b06606bebbbc84704af79ec03fae63860424fb953836ed9db34638b47213 WatchSource:0}: Error finding container e297b06606bebbbc84704af79ec03fae63860424fb953836ed9db34638b47213: Status 404 returned error can't find the container with id e297b06606bebbbc84704af79ec03fae63860424fb953836ed9db34638b47213 Jan 22 16:31:07 crc kubenswrapper[4704]: I0122 16:31:07.032760 4704 generic.go:334] "Generic (PLEG): container finished" podID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerID="64da1e28395c2e14079ffcaea7fcd4776598829837d36d4ecb560cc0cbe6058a" exitCode=0 Jan 22 16:31:07 crc kubenswrapper[4704]: I0122 16:31:07.032832 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vsbg" event={"ID":"97a2a078-75ba-4e1b-b477-4c076b1be529","Type":"ContainerDied","Data":"64da1e28395c2e14079ffcaea7fcd4776598829837d36d4ecb560cc0cbe6058a"} Jan 
22 16:31:07 crc kubenswrapper[4704]: I0122 16:31:07.037191 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2","Type":"ContainerStarted","Data":"48f644c56d39ac67aceadb816ea2dc40351f6e0b2a98b846006e88d4c8c2cbbe"} Jan 22 16:31:07 crc kubenswrapper[4704]: I0122 16:31:07.057653 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57zfj" event={"ID":"d39c37f0-3471-4222-b3f0-b9947d334ef5","Type":"ContainerStarted","Data":"6235916ba9149102a0631dae5384bae0d09d5a9dbbe8bce24953b202969f0889"} Jan 22 16:31:07 crc kubenswrapper[4704]: I0122 16:31:07.072447 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.072432599 podStartE2EDuration="2.072432599s" podCreationTimestamp="2026-01-22 16:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:07.06975871 +0000 UTC m=+159.714305410" watchObservedRunningTime="2026-01-22 16:31:07.072432599 +0000 UTC m=+159.716979299" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.079846 4704 generic.go:334] "Generic (PLEG): container finished" podID="0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2" containerID="48f644c56d39ac67aceadb816ea2dc40351f6e0b2a98b846006e88d4c8c2cbbe" exitCode=0 Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.079936 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2","Type":"ContainerDied","Data":"48f644c56d39ac67aceadb816ea2dc40351f6e0b2a98b846006e88d4c8c2cbbe"} Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.083555 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57zfj" 
event={"ID":"d39c37f0-3471-4222-b3f0-b9947d334ef5","Type":"ContainerStarted","Data":"7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3"} Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.090718 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m44hn" event={"ID":"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c","Type":"ContainerStarted","Data":"e297b06606bebbbc84704af79ec03fae63860424fb953836ed9db34638b47213"} Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.453280 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.454314 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.456450 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.458436 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.464921 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.574889 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7d272f-cd99-4830-a711-85dc02219617-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.574951 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/8b7d272f-cd99-4830-a711-85dc02219617-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.676202 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7d272f-cd99-4830-a711-85dc02219617-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.676266 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b7d272f-cd99-4830-a711-85dc02219617-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.676337 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b7d272f-cd99-4830-a711-85dc02219617-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.697370 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7d272f-cd99-4830-a711-85dc02219617-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:08 crc kubenswrapper[4704]: I0122 16:31:08.776876 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.046732 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.102632 4704 generic.go:334] "Generic (PLEG): container finished" podID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerID="e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319" exitCode=0 Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.102704 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m44hn" event={"ID":"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c","Type":"ContainerDied","Data":"e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319"} Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.110396 4704 generic.go:334] "Generic (PLEG): container finished" podID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerID="7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3" exitCode=0 Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.110475 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57zfj" event={"ID":"d39c37f0-3471-4222-b3f0-b9947d334ef5","Type":"ContainerDied","Data":"7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3"} Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.112138 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8b7d272f-cd99-4830-a711-85dc02219617","Type":"ContainerStarted","Data":"8bffb6dbf6d8ee2180df1e5adc0106e48adefe4903fe85f88306a984e96f45d3"} Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.196751 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod 
\"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.204251 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/022e2512-8e2d-483f-a733-8681aad464a3-metrics-certs\") pod \"network-metrics-daemon-92rrv\" (UID: \"022e2512-8e2d-483f-a733-8681aad464a3\") " pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.325185 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.459115 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-92rrv" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.502004 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kubelet-dir\") pod \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.502114 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kube-api-access\") pod \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\" (UID: \"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2\") " Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.502457 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2" (UID: "0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.503563 4704 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.507391 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2" (UID: "0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.605462 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:09 crc kubenswrapper[4704]: I0122 16:31:09.904935 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-92rrv"] Jan 22 16:31:09 crc kubenswrapper[4704]: W0122 16:31:09.929158 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod022e2512_8e2d_483f_a733_8681aad464a3.slice/crio-6321d0f5499abd5c0b1d6d7e1f2a988921f57e3dabbcfe4e8794c1141eb53f82 WatchSource:0}: Error finding container 6321d0f5499abd5c0b1d6d7e1f2a988921f57e3dabbcfe4e8794c1141eb53f82: Status 404 returned error can't find the container with id 6321d0f5499abd5c0b1d6d7e1f2a988921f57e3dabbcfe4e8794c1141eb53f82 Jan 22 16:31:10 crc kubenswrapper[4704]: I0122 16:31:10.122674 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2","Type":"ContainerDied","Data":"8c7c2824a5272cfcfdbf3603487b9af53811b81e06b19c4b803ee1f93739501b"} Jan 22 16:31:10 crc kubenswrapper[4704]: I0122 16:31:10.122716 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7c2824a5272cfcfdbf3603487b9af53811b81e06b19c4b803ee1f93739501b" Jan 22 16:31:10 crc kubenswrapper[4704]: I0122 16:31:10.122849 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 16:31:10 crc kubenswrapper[4704]: I0122 16:31:10.125453 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-92rrv" event={"ID":"022e2512-8e2d-483f-a733-8681aad464a3","Type":"ContainerStarted","Data":"6321d0f5499abd5c0b1d6d7e1f2a988921f57e3dabbcfe4e8794c1141eb53f82"} Jan 22 16:31:10 crc kubenswrapper[4704]: I0122 16:31:10.657988 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kxvpl" Jan 22 16:31:11 crc kubenswrapper[4704]: I0122 16:31:11.148091 4704 generic.go:334] "Generic (PLEG): container finished" podID="8b7d272f-cd99-4830-a711-85dc02219617" containerID="a0e76242cf466cfc06b1212813ffac07aba6081fd99acd255a2596dc3dcf2ffe" exitCode=0 Jan 22 16:31:11 crc kubenswrapper[4704]: I0122 16:31:11.148359 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8b7d272f-cd99-4830-a711-85dc02219617","Type":"ContainerDied","Data":"a0e76242cf466cfc06b1212813ffac07aba6081fd99acd255a2596dc3dcf2ffe"} Jan 22 16:31:11 crc kubenswrapper[4704]: I0122 16:31:11.154843 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-92rrv" event={"ID":"022e2512-8e2d-483f-a733-8681aad464a3","Type":"ContainerStarted","Data":"bde554fc54a6e2e64eeb6e5355c0200ffca1802eb937c3edd2bdf755ac137405"} Jan 22 16:31:12 crc 
kubenswrapper[4704]: I0122 16:31:12.164860 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-92rrv" event={"ID":"022e2512-8e2d-483f-a733-8681aad464a3","Type":"ContainerStarted","Data":"4a5f76b4288a60377376a68d16630c36439ee3c853f4db0ddb8e5a76dc9de61f"} Jan 22 16:31:14 crc kubenswrapper[4704]: I0122 16:31:14.538757 4704 patch_prober.go:28] interesting pod/console-f9d7485db-khgwd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 22 16:31:14 crc kubenswrapper[4704]: I0122 16:31:14.538832 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-khgwd" podUID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.414973 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.440877 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-92rrv" podStartSLOduration=149.440854052 podStartE2EDuration="2m29.440854052s" podCreationTimestamp="2026-01-22 16:28:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:13.187463928 +0000 UTC m=+165.832010668" watchObservedRunningTime="2026-01-22 16:31:15.440854052 +0000 UTC m=+168.085400752" Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.590131 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7d272f-cd99-4830-a711-85dc02219617-kube-api-access\") pod \"8b7d272f-cd99-4830-a711-85dc02219617\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.591361 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b7d272f-cd99-4830-a711-85dc02219617-kubelet-dir\") pod \"8b7d272f-cd99-4830-a711-85dc02219617\" (UID: \"8b7d272f-cd99-4830-a711-85dc02219617\") " Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.591458 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b7d272f-cd99-4830-a711-85dc02219617-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b7d272f-cd99-4830-a711-85dc02219617" (UID: "8b7d272f-cd99-4830-a711-85dc02219617"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.591664 4704 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b7d272f-cd99-4830-a711-85dc02219617-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.595364 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b7d272f-cd99-4830-a711-85dc02219617-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b7d272f-cd99-4830-a711-85dc02219617" (UID: "8b7d272f-cd99-4830-a711-85dc02219617"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:31:15 crc kubenswrapper[4704]: I0122 16:31:15.693280 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b7d272f-cd99-4830-a711-85dc02219617-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:16 crc kubenswrapper[4704]: I0122 16:31:16.192333 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8b7d272f-cd99-4830-a711-85dc02219617","Type":"ContainerDied","Data":"8bffb6dbf6d8ee2180df1e5adc0106e48adefe4903fe85f88306a984e96f45d3"} Jan 22 16:31:16 crc kubenswrapper[4704]: I0122 16:31:16.192838 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bffb6dbf6d8ee2180df1e5adc0106e48adefe4903fe85f88306a984e96f45d3" Jan 22 16:31:16 crc kubenswrapper[4704]: I0122 16:31:16.192407 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 16:31:18 crc kubenswrapper[4704]: I0122 16:31:18.205545 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:31:19 crc kubenswrapper[4704]: I0122 16:31:19.086504 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:31:19 crc kubenswrapper[4704]: I0122 16:31:19.087082 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:31:23 crc kubenswrapper[4704]: I0122 16:31:23.316168 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:31:24 crc kubenswrapper[4704]: I0122 16:31:24.543184 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:31:24 crc kubenswrapper[4704]: I0122 16:31:24.548157 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:31:33 crc kubenswrapper[4704]: E0122 16:31:33.825363 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 16:31:33 crc kubenswrapper[4704]: E0122 16:31:33.826059 4704 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpx9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ws5kw_openshift-marketplace(bd467440-1ed0-4085-b8d6-e4245de4ffda): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:31:33 crc kubenswrapper[4704]: E0122 16:31:33.827228 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ws5kw" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" Jan 22 16:31:33 crc kubenswrapper[4704]: I0122 16:31:33.871441 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 16:31:35 crc kubenswrapper[4704]: E0122 16:31:35.769417 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ws5kw" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" Jan 22 16:31:35 crc kubenswrapper[4704]: E0122 16:31:35.831398 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 16:31:35 crc kubenswrapper[4704]: E0122 16:31:35.831583 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt6wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4kgkm_openshift-marketplace(16980b70-91da-419b-b855-6a2551f62423): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:31:35 crc kubenswrapper[4704]: E0122 16:31:35.832827 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4kgkm" podUID="16980b70-91da-419b-b855-6a2551f62423" Jan 22 16:31:35 crc 
kubenswrapper[4704]: I0122 16:31:35.929993 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j286m" Jan 22 16:31:37 crc kubenswrapper[4704]: E0122 16:31:37.053583 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4kgkm" podUID="16980b70-91da-419b-b855-6a2551f62423" Jan 22 16:31:37 crc kubenswrapper[4704]: E0122 16:31:37.069129 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 16:31:37 crc kubenswrapper[4704]: E0122 16:31:37.069279 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jct6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8vsbg_openshift-marketplace(97a2a078-75ba-4e1b-b477-4c076b1be529): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:31:37 crc kubenswrapper[4704]: E0122 16:31:37.069619 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 16:31:37 crc kubenswrapper[4704]: E0122 16:31:37.069912 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsfpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vrdrd_openshift-marketplace(137b8d6b-e852-4f81-992d-b5cc4b5ed519): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:31:37 crc kubenswrapper[4704]: E0122 16:31:37.070359 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8vsbg" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" Jan 22 16:31:37 crc kubenswrapper[4704]: E0122 16:31:37.071803 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vrdrd" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" Jan 22 16:31:40 crc kubenswrapper[4704]: E0122 16:31:40.019826 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vrdrd" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" Jan 22 16:31:40 crc kubenswrapper[4704]: E0122 16:31:40.019976 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8vsbg" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" Jan 22 16:31:40 crc kubenswrapper[4704]: E0122 16:31:40.084548 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 16:31:40 crc kubenswrapper[4704]: E0122 16:31:40.084708 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xlk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-57zfj_openshift-marketplace(d39c37f0-3471-4222-b3f0-b9947d334ef5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:31:40 crc kubenswrapper[4704]: E0122 16:31:40.086188 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-57zfj" 
podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" Jan 22 16:31:40 crc kubenswrapper[4704]: I0122 16:31:40.344548 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m44hn" event={"ID":"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c","Type":"ContainerStarted","Data":"c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07"} Jan 22 16:31:40 crc kubenswrapper[4704]: I0122 16:31:40.350178 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5fhp" event={"ID":"e85b5045-b0f3-49cd-97e4-a4c0688313e1","Type":"ContainerStarted","Data":"c1b05bd8cde56421c1f3fc4312495394fbd48b60661a659c408cf9f93e1f8395"} Jan 22 16:31:40 crc kubenswrapper[4704]: I0122 16:31:40.352930 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlsl" event={"ID":"798305b7-a0da-49f9-904a-265e215f1fea","Type":"ContainerStarted","Data":"7b7903df8c0314805afdd70a19a9a3175d02f9d70afdc58d4dc886c594447b63"} Jan 22 16:31:40 crc kubenswrapper[4704]: E0122 16:31:40.354699 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-57zfj" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.357516 4704 generic.go:334] "Generic (PLEG): container finished" podID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerID="c1b05bd8cde56421c1f3fc4312495394fbd48b60661a659c408cf9f93e1f8395" exitCode=0 Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.357846 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5fhp" event={"ID":"e85b5045-b0f3-49cd-97e4-a4c0688313e1","Type":"ContainerDied","Data":"c1b05bd8cde56421c1f3fc4312495394fbd48b60661a659c408cf9f93e1f8395"} Jan 22 16:31:41 crc 
kubenswrapper[4704]: I0122 16:31:41.361060 4704 generic.go:334] "Generic (PLEG): container finished" podID="798305b7-a0da-49f9-904a-265e215f1fea" containerID="7b7903df8c0314805afdd70a19a9a3175d02f9d70afdc58d4dc886c594447b63" exitCode=0 Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.361116 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlsl" event={"ID":"798305b7-a0da-49f9-904a-265e215f1fea","Type":"ContainerDied","Data":"7b7903df8c0314805afdd70a19a9a3175d02f9d70afdc58d4dc886c594447b63"} Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.366455 4704 generic.go:334] "Generic (PLEG): container finished" podID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerID="c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07" exitCode=0 Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.366518 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m44hn" event={"ID":"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c","Type":"ContainerDied","Data":"c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07"} Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.651295 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 16:31:41 crc kubenswrapper[4704]: E0122 16:31:41.651741 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7d272f-cd99-4830-a711-85dc02219617" containerName="pruner" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.651757 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7d272f-cd99-4830-a711-85dc02219617" containerName="pruner" Jan 22 16:31:41 crc kubenswrapper[4704]: E0122 16:31:41.651780 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2" containerName="pruner" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.651803 4704 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2" containerName="pruner" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.651927 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd68e80-1594-4d0d-93aa-d90a9eb0a1a2" containerName="pruner" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.651942 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b7d272f-cd99-4830-a711-85dc02219617" containerName="pruner" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.653330 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.672716 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.672819 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.679229 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.745714 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.745950 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.847266 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.847378 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.847463 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.865531 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:41 crc kubenswrapper[4704]: I0122 16:31:41.991202 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:42 crc kubenswrapper[4704]: I0122 16:31:42.373518 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m44hn" event={"ID":"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c","Type":"ContainerStarted","Data":"58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd"} Jan 22 16:31:42 crc kubenswrapper[4704]: I0122 16:31:42.376285 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5fhp" event={"ID":"e85b5045-b0f3-49cd-97e4-a4c0688313e1","Type":"ContainerStarted","Data":"788170eef95fd0ebe52c19196912857df4b72ed1cf0508496b0128bc67023cc1"} Jan 22 16:31:42 crc kubenswrapper[4704]: I0122 16:31:42.378467 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlsl" event={"ID":"798305b7-a0da-49f9-904a-265e215f1fea","Type":"ContainerStarted","Data":"9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58"} Jan 22 16:31:42 crc kubenswrapper[4704]: I0122 16:31:42.395761 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m44hn" podStartSLOduration=3.376812242 podStartE2EDuration="36.395746576s" podCreationTimestamp="2026-01-22 16:31:06 +0000 UTC" firstStartedPulling="2026-01-22 16:31:09.103956335 +0000 UTC m=+161.748503035" lastFinishedPulling="2026-01-22 16:31:42.122890669 +0000 UTC m=+194.767437369" observedRunningTime="2026-01-22 16:31:42.393675712 +0000 UTC m=+195.038222412" watchObservedRunningTime="2026-01-22 16:31:42.395746576 +0000 UTC m=+195.040293276" Jan 22 16:31:42 crc kubenswrapper[4704]: I0122 16:31:42.412429 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 16:31:42 crc kubenswrapper[4704]: I0122 16:31:42.420385 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-8qlsl" podStartSLOduration=2.376423316 podStartE2EDuration="40.420364557s" podCreationTimestamp="2026-01-22 16:31:02 +0000 UTC" firstStartedPulling="2026-01-22 16:31:03.938712605 +0000 UTC m=+156.583259305" lastFinishedPulling="2026-01-22 16:31:41.982653846 +0000 UTC m=+194.627200546" observedRunningTime="2026-01-22 16:31:42.418109838 +0000 UTC m=+195.062656538" watchObservedRunningTime="2026-01-22 16:31:42.420364557 +0000 UTC m=+195.064911257" Jan 22 16:31:42 crc kubenswrapper[4704]: W0122 16:31:42.423607 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod760c7f69_7c8d_4378_ba5f_eebc7f130e8e.slice/crio-f1c7ae7871fb8a3a9ccfd032d5d36f087e891ad2a8b7549131806303fa38f5a1 WatchSource:0}: Error finding container f1c7ae7871fb8a3a9ccfd032d5d36f087e891ad2a8b7549131806303fa38f5a1: Status 404 returned error can't find the container with id f1c7ae7871fb8a3a9ccfd032d5d36f087e891ad2a8b7549131806303fa38f5a1 Jan 22 16:31:42 crc kubenswrapper[4704]: I0122 16:31:42.436766 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5fhp" podStartSLOduration=2.347783418 podStartE2EDuration="39.436747004s" podCreationTimestamp="2026-01-22 16:31:03 +0000 UTC" firstStartedPulling="2026-01-22 16:31:04.966396654 +0000 UTC m=+157.610943354" lastFinishedPulling="2026-01-22 16:31:42.05536022 +0000 UTC m=+194.699906940" observedRunningTime="2026-01-22 16:31:42.436188209 +0000 UTC m=+195.080734929" watchObservedRunningTime="2026-01-22 16:31:42.436747004 +0000 UTC m=+195.081293704" Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.294951 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.295285 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.386131 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"760c7f69-7c8d-4378-ba5f-eebc7f130e8e","Type":"ContainerStarted","Data":"3b67ad87434b7d4c997195d6724124ab68017acff642a005a41d5420a3f7602b"} Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.386188 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"760c7f69-7c8d-4378-ba5f-eebc7f130e8e","Type":"ContainerStarted","Data":"f1c7ae7871fb8a3a9ccfd032d5d36f087e891ad2a8b7549131806303fa38f5a1"} Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.406312 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.406295547 podStartE2EDuration="2.406295547s" podCreationTimestamp="2026-01-22 16:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:43.403273608 +0000 UTC m=+196.047820318" watchObservedRunningTime="2026-01-22 16:31:43.406295547 +0000 UTC m=+196.050842247" Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.594301 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l6zs2"] Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.974627 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:43 crc kubenswrapper[4704]: I0122 16:31:43.975020 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:44 crc kubenswrapper[4704]: I0122 16:31:44.012466 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:44 crc kubenswrapper[4704]: I0122 16:31:44.351388 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8qlsl" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="registry-server" probeResult="failure" output=< Jan 22 16:31:44 crc kubenswrapper[4704]: timeout: failed to connect service ":50051" within 1s Jan 22 16:31:44 crc kubenswrapper[4704]: > Jan 22 16:31:44 crc kubenswrapper[4704]: I0122 16:31:44.391384 4704 generic.go:334] "Generic (PLEG): container finished" podID="760c7f69-7c8d-4378-ba5f-eebc7f130e8e" containerID="3b67ad87434b7d4c997195d6724124ab68017acff642a005a41d5420a3f7602b" exitCode=0 Jan 22 16:31:44 crc kubenswrapper[4704]: I0122 16:31:44.391421 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"760c7f69-7c8d-4378-ba5f-eebc7f130e8e","Type":"ContainerDied","Data":"3b67ad87434b7d4c997195d6724124ab68017acff642a005a41d5420a3f7602b"} Jan 22 16:31:45 crc kubenswrapper[4704]: I0122 16:31:45.622052 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:45 crc kubenswrapper[4704]: I0122 16:31:45.707755 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kube-api-access\") pod \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " Jan 22 16:31:45 crc kubenswrapper[4704]: I0122 16:31:45.707821 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kubelet-dir\") pod \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\" (UID: \"760c7f69-7c8d-4378-ba5f-eebc7f130e8e\") " Jan 22 16:31:45 crc kubenswrapper[4704]: I0122 16:31:45.707996 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "760c7f69-7c8d-4378-ba5f-eebc7f130e8e" (UID: "760c7f69-7c8d-4378-ba5f-eebc7f130e8e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:31:45 crc kubenswrapper[4704]: I0122 16:31:45.708178 4704 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:45 crc kubenswrapper[4704]: I0122 16:31:45.721026 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "760c7f69-7c8d-4378-ba5f-eebc7f130e8e" (UID: "760c7f69-7c8d-4378-ba5f-eebc7f130e8e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:31:45 crc kubenswrapper[4704]: I0122 16:31:45.809763 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/760c7f69-7c8d-4378-ba5f-eebc7f130e8e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.401990 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"760c7f69-7c8d-4378-ba5f-eebc7f130e8e","Type":"ContainerDied","Data":"f1c7ae7871fb8a3a9ccfd032d5d36f087e891ad2a8b7549131806303fa38f5a1"} Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.402045 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c7ae7871fb8a3a9ccfd032d5d36f087e891ad2a8b7549131806303fa38f5a1" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.402130 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.455032 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 16:31:46 crc kubenswrapper[4704]: E0122 16:31:46.455353 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760c7f69-7c8d-4378-ba5f-eebc7f130e8e" containerName="pruner" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.455366 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="760c7f69-7c8d-4378-ba5f-eebc7f130e8e" containerName="pruner" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.455479 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="760c7f69-7c8d-4378-ba5f-eebc7f130e8e" containerName="pruner" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.455931 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.458484 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.459897 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.462002 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.622970 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-var-lock\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.623068 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kube-api-access\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.623138 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.705536 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:46 crc 
kubenswrapper[4704]: I0122 16:31:46.705700 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.725048 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kube-api-access\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.725152 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.725234 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-var-lock\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.725351 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-var-lock\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.725487 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.746498 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kube-api-access\") pod \"installer-9-crc\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:46 crc kubenswrapper[4704]: I0122 16:31:46.782599 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:31:47 crc kubenswrapper[4704]: I0122 16:31:47.185139 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 16:31:47 crc kubenswrapper[4704]: I0122 16:31:47.410064 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07f8c1e2-21b3-4c4a-a235-8a5bc193719c","Type":"ContainerStarted","Data":"7ebca18959f30df16c2446f09698249b99dc1bd676e6c34e9cde909d415273a6"} Jan 22 16:31:47 crc kubenswrapper[4704]: I0122 16:31:47.753050 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m44hn" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="registry-server" probeResult="failure" output=< Jan 22 16:31:47 crc kubenswrapper[4704]: timeout: failed to connect service ":50051" within 1s Jan 22 16:31:47 crc kubenswrapper[4704]: > Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.086913 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.086977 4704 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.087021 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.087590 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.087691 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3" gracePeriod=600 Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.422869 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07f8c1e2-21b3-4c4a-a235-8a5bc193719c","Type":"ContainerStarted","Data":"2fa5d21d56510c86cbc2948f633d8ced4691d38ba87f4301637dc7b54fffa575"} Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.424860 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3" exitCode=0 Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.424894 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3"} Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.424909 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"472b8c837b02223b278946b3b749c037d005e52a819017280faf01387d829462"} Jan 22 16:31:49 crc kubenswrapper[4704]: I0122 16:31:49.438221 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.43818567 podStartE2EDuration="3.43818567s" podCreationTimestamp="2026-01-22 16:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:31:49.435432649 +0000 UTC m=+202.079979359" watchObservedRunningTime="2026-01-22 16:31:49.43818567 +0000 UTC m=+202.082732370" Jan 22 16:31:50 crc kubenswrapper[4704]: I0122 16:31:50.432705 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ws5kw" event={"ID":"bd467440-1ed0-4085-b8d6-e4245de4ffda","Type":"ContainerStarted","Data":"d103572b33c74328e49718b7ce4203979529dff095e9bc8a75a826265b7e8691"} Jan 22 16:31:51 crc kubenswrapper[4704]: I0122 16:31:51.440883 4704 generic.go:334] "Generic (PLEG): container finished" podID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerID="d103572b33c74328e49718b7ce4203979529dff095e9bc8a75a826265b7e8691" exitCode=0 Jan 22 16:31:51 crc kubenswrapper[4704]: I0122 16:31:51.440931 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ws5kw" 
event={"ID":"bd467440-1ed0-4085-b8d6-e4245de4ffda","Type":"ContainerDied","Data":"d103572b33c74328e49718b7ce4203979529dff095e9bc8a75a826265b7e8691"} Jan 22 16:31:53 crc kubenswrapper[4704]: I0122 16:31:53.340135 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:53 crc kubenswrapper[4704]: I0122 16:31:53.377911 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:31:54 crc kubenswrapper[4704]: I0122 16:31:54.015008 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:55 crc kubenswrapper[4704]: I0122 16:31:55.861846 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5fhp"] Jan 22 16:31:55 crc kubenswrapper[4704]: I0122 16:31:55.862145 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q5fhp" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="registry-server" containerID="cri-o://788170eef95fd0ebe52c19196912857df4b72ed1cf0508496b0128bc67023cc1" gracePeriod=2 Jan 22 16:31:56 crc kubenswrapper[4704]: I0122 16:31:56.749609 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:56 crc kubenswrapper[4704]: I0122 16:31:56.795427 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.475286 4704 generic.go:334] "Generic (PLEG): container finished" podID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerID="788170eef95fd0ebe52c19196912857df4b72ed1cf0508496b0128bc67023cc1" exitCode=0 Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.475448 4704 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5fhp" event={"ID":"e85b5045-b0f3-49cd-97e4-a4c0688313e1","Type":"ContainerDied","Data":"788170eef95fd0ebe52c19196912857df4b72ed1cf0508496b0128bc67023cc1"} Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.475698 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5fhp" event={"ID":"e85b5045-b0f3-49cd-97e4-a4c0688313e1","Type":"ContainerDied","Data":"2109739d789d0ebde6df1419357c227bd552655969fbc6725ce6e7a9858190a8"} Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.475726 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2109739d789d0ebde6df1419357c227bd552655969fbc6725ce6e7a9858190a8" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.491632 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.660187 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-catalog-content\") pod \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.660849 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzjm5\" (UniqueName: \"kubernetes.io/projected/e85b5045-b0f3-49cd-97e4-a4c0688313e1-kube-api-access-nzjm5\") pod \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\" (UID: \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.661015 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-utilities\") pod \"e85b5045-b0f3-49cd-97e4-a4c0688313e1\" (UID: 
\"e85b5045-b0f3-49cd-97e4-a4c0688313e1\") " Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.661678 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-utilities" (OuterVolumeSpecName: "utilities") pod "e85b5045-b0f3-49cd-97e4-a4c0688313e1" (UID: "e85b5045-b0f3-49cd-97e4-a4c0688313e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.669780 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85b5045-b0f3-49cd-97e4-a4c0688313e1-kube-api-access-nzjm5" (OuterVolumeSpecName: "kube-api-access-nzjm5") pod "e85b5045-b0f3-49cd-97e4-a4c0688313e1" (UID: "e85b5045-b0f3-49cd-97e4-a4c0688313e1"). InnerVolumeSpecName "kube-api-access-nzjm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.730627 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e85b5045-b0f3-49cd-97e4-a4c0688313e1" (UID: "e85b5045-b0f3-49cd-97e4-a4c0688313e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.762912 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.762952 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzjm5\" (UniqueName: \"kubernetes.io/projected/e85b5045-b0f3-49cd-97e4-a4c0688313e1-kube-api-access-nzjm5\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:57 crc kubenswrapper[4704]: I0122 16:31:57.762964 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85b5045-b0f3-49cd-97e4-a4c0688313e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.483954 4704 generic.go:334] "Generic (PLEG): container finished" podID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerID="919d4ca51c3938d14c876acfcfef216c4068f09603e25b8f119845c1d2d5bb53" exitCode=0 Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.484040 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vsbg" event={"ID":"97a2a078-75ba-4e1b-b477-4c076b1be529","Type":"ContainerDied","Data":"919d4ca51c3938d14c876acfcfef216c4068f09603e25b8f119845c1d2d5bb53"} Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.487152 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ws5kw" event={"ID":"bd467440-1ed0-4085-b8d6-e4245de4ffda","Type":"ContainerStarted","Data":"408c700fd5d516d811885e848d436d86d8b2b25bda68dfe829156de6f42b989c"} Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.490076 4704 generic.go:334] "Generic (PLEG): container finished" podID="16980b70-91da-419b-b855-6a2551f62423" 
containerID="c340af381901978caf447bf2db61ecda2dd7ef72676196cd2f53a6a56e51306f" exitCode=0 Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.490102 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kgkm" event={"ID":"16980b70-91da-419b-b855-6a2551f62423","Type":"ContainerDied","Data":"c340af381901978caf447bf2db61ecda2dd7ef72676196cd2f53a6a56e51306f"} Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.492336 4704 generic.go:334] "Generic (PLEG): container finished" podID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerID="e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb" exitCode=0 Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.492392 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrdrd" event={"ID":"137b8d6b-e852-4f81-992d-b5cc4b5ed519","Type":"ContainerDied","Data":"e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb"} Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.495821 4704 generic.go:334] "Generic (PLEG): container finished" podID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerID="8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3" exitCode=0 Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.495932 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q5fhp" Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.496571 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57zfj" event={"ID":"d39c37f0-3471-4222-b3f0-b9947d334ef5","Type":"ContainerDied","Data":"8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3"} Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.524542 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ws5kw" podStartSLOduration=3.150369491 podStartE2EDuration="55.524526422s" podCreationTimestamp="2026-01-22 16:31:03 +0000 UTC" firstStartedPulling="2026-01-22 16:31:04.964013822 +0000 UTC m=+157.608560522" lastFinishedPulling="2026-01-22 16:31:57.338170743 +0000 UTC m=+209.982717453" observedRunningTime="2026-01-22 16:31:58.523060039 +0000 UTC m=+211.167606759" watchObservedRunningTime="2026-01-22 16:31:58.524526422 +0000 UTC m=+211.169073122" Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.590326 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5fhp"] Jan 22 16:31:58 crc kubenswrapper[4704]: I0122 16:31:58.594379 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q5fhp"] Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.502532 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kgkm" event={"ID":"16980b70-91da-419b-b855-6a2551f62423","Type":"ContainerStarted","Data":"227df9cadaca59a33153bb852b88588d4c533eb00e3755842b3dc9f32ac3658d"} Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.505428 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrdrd" 
event={"ID":"137b8d6b-e852-4f81-992d-b5cc4b5ed519","Type":"ContainerStarted","Data":"91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b"} Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.508240 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57zfj" event={"ID":"d39c37f0-3471-4222-b3f0-b9947d334ef5","Type":"ContainerStarted","Data":"c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e"} Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.510280 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vsbg" event={"ID":"97a2a078-75ba-4e1b-b477-4c076b1be529","Type":"ContainerStarted","Data":"f25b65a576d88b391238612853c5ab8a3236a12a765590ea31572bbfbc003914"} Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.560601 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-57zfj" podStartSLOduration=4.685432001 podStartE2EDuration="54.560580218s" podCreationTimestamp="2026-01-22 16:31:05 +0000 UTC" firstStartedPulling="2026-01-22 16:31:09.111806039 +0000 UTC m=+161.756352739" lastFinishedPulling="2026-01-22 16:31:58.986954256 +0000 UTC m=+211.631500956" observedRunningTime="2026-01-22 16:31:59.558861337 +0000 UTC m=+212.203408057" watchObservedRunningTime="2026-01-22 16:31:59.560580218 +0000 UTC m=+212.205126918" Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.563535 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4kgkm" podStartSLOduration=2.645703057 podStartE2EDuration="57.563521475s" podCreationTimestamp="2026-01-22 16:31:02 +0000 UTC" firstStartedPulling="2026-01-22 16:31:03.939641449 +0000 UTC m=+156.584188149" lastFinishedPulling="2026-01-22 16:31:58.857459867 +0000 UTC m=+211.502006567" observedRunningTime="2026-01-22 16:31:59.525192891 +0000 UTC m=+212.169739591" 
watchObservedRunningTime="2026-01-22 16:31:59.563521475 +0000 UTC m=+212.208068175" Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.581536 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8vsbg" podStartSLOduration=2.745495769 podStartE2EDuration="54.581514927s" podCreationTimestamp="2026-01-22 16:31:05 +0000 UTC" firstStartedPulling="2026-01-22 16:31:07.070251012 +0000 UTC m=+159.714797712" lastFinishedPulling="2026-01-22 16:31:58.90627017 +0000 UTC m=+211.550816870" observedRunningTime="2026-01-22 16:31:59.579339312 +0000 UTC m=+212.223886012" watchObservedRunningTime="2026-01-22 16:31:59.581514927 +0000 UTC m=+212.226061627" Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.600686 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrdrd" podStartSLOduration=2.6764312070000003 podStartE2EDuration="55.600666463s" podCreationTimestamp="2026-01-22 16:31:04 +0000 UTC" firstStartedPulling="2026-01-22 16:31:06.042579625 +0000 UTC m=+158.687126325" lastFinishedPulling="2026-01-22 16:31:58.966814881 +0000 UTC m=+211.611361581" observedRunningTime="2026-01-22 16:31:59.596549831 +0000 UTC m=+212.241096551" watchObservedRunningTime="2026-01-22 16:31:59.600666463 +0000 UTC m=+212.245213163" Jan 22 16:31:59 crc kubenswrapper[4704]: I0122 16:31:59.642228 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" path="/var/lib/kubelet/pods/e85b5045-b0f3-49cd-97e4-a4c0688313e1/volumes" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.065269 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m44hn"] Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.065726 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m44hn" 
podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="registry-server" containerID="cri-o://58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd" gracePeriod=2 Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.449706 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.517273 4704 generic.go:334] "Generic (PLEG): container finished" podID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerID="58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd" exitCode=0 Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.517315 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m44hn" event={"ID":"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c","Type":"ContainerDied","Data":"58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd"} Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.517345 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m44hn" event={"ID":"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c","Type":"ContainerDied","Data":"e297b06606bebbbc84704af79ec03fae63860424fb953836ed9db34638b47213"} Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.517347 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m44hn" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.517363 4704 scope.go:117] "RemoveContainer" containerID="58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.534311 4704 scope.go:117] "RemoveContainer" containerID="c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.550857 4704 scope.go:117] "RemoveContainer" containerID="e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.566891 4704 scope.go:117] "RemoveContainer" containerID="58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd" Jan 22 16:32:00 crc kubenswrapper[4704]: E0122 16:32:00.567464 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd\": container with ID starting with 58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd not found: ID does not exist" containerID="58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.567507 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd"} err="failed to get container status \"58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd\": rpc error: code = NotFound desc = could not find container \"58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd\": container with ID starting with 58bdb9780d130bf0a72d6c27e7582e02894f24e901c278c58160efa344f9bacd not found: ID does not exist" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.567537 4704 scope.go:117] "RemoveContainer" 
containerID="c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07" Jan 22 16:32:00 crc kubenswrapper[4704]: E0122 16:32:00.567907 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07\": container with ID starting with c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07 not found: ID does not exist" containerID="c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.567954 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07"} err="failed to get container status \"c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07\": rpc error: code = NotFound desc = could not find container \"c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07\": container with ID starting with c92dac29a875bb10087e9e88772e36992848a462daa6541dcef0e9af2c597d07 not found: ID does not exist" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.567991 4704 scope.go:117] "RemoveContainer" containerID="e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319" Jan 22 16:32:00 crc kubenswrapper[4704]: E0122 16:32:00.568373 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319\": container with ID starting with e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319 not found: ID does not exist" containerID="e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.568404 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319"} err="failed to get container status \"e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319\": rpc error: code = NotFound desc = could not find container \"e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319\": container with ID starting with e8e7742c69da5997cddf5bd6cf2ce9d59230dac1ab97d44d00d3e92630d9c319 not found: ID does not exist" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.597849 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vvd5\" (UniqueName: \"kubernetes.io/projected/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-kube-api-access-7vvd5\") pod \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.597960 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-catalog-content\") pod \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.597995 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-utilities\") pod \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\" (UID: \"09954e8e-8b14-4f6c-88b9-75cb8fac0f4c\") " Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.598719 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-utilities" (OuterVolumeSpecName: "utilities") pod "09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" (UID: "09954e8e-8b14-4f6c-88b9-75cb8fac0f4c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.604035 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-kube-api-access-7vvd5" (OuterVolumeSpecName: "kube-api-access-7vvd5") pod "09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" (UID: "09954e8e-8b14-4f6c-88b9-75cb8fac0f4c"). InnerVolumeSpecName "kube-api-access-7vvd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.699363 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.699401 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vvd5\" (UniqueName: \"kubernetes.io/projected/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-kube-api-access-7vvd5\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.732823 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" (UID: "09954e8e-8b14-4f6c-88b9-75cb8fac0f4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.800897 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.838293 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m44hn"] Jan 22 16:32:00 crc kubenswrapper[4704]: I0122 16:32:00.843286 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m44hn"] Jan 22 16:32:01 crc kubenswrapper[4704]: I0122 16:32:01.641590 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" path="/var/lib/kubelet/pods/09954e8e-8b14-4f6c-88b9-75cb8fac0f4c/volumes" Jan 22 16:32:03 crc kubenswrapper[4704]: I0122 16:32:03.080992 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:32:03 crc kubenswrapper[4704]: I0122 16:32:03.081036 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:32:03 crc kubenswrapper[4704]: I0122 16:32:03.127279 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:32:03 crc kubenswrapper[4704]: I0122 16:32:03.520431 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:32:03 crc kubenswrapper[4704]: I0122 16:32:03.520540 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:32:03 crc kubenswrapper[4704]: I0122 16:32:03.593429 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:32:03 crc kubenswrapper[4704]: I0122 16:32:03.640223 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:32:05 crc kubenswrapper[4704]: I0122 16:32:05.316765 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:32:05 crc kubenswrapper[4704]: I0122 16:32:05.316848 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:32:05 crc kubenswrapper[4704]: I0122 16:32:05.367496 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:32:05 crc kubenswrapper[4704]: I0122 16:32:05.592126 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:32:05 crc kubenswrapper[4704]: I0122 16:32:05.677938 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:32:05 crc kubenswrapper[4704]: I0122 16:32:05.678004 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:32:05 crc kubenswrapper[4704]: I0122 16:32:05.714930 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:32:06 crc kubenswrapper[4704]: I0122 16:32:06.292034 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:32:06 crc kubenswrapper[4704]: I0122 16:32:06.292850 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:32:06 crc kubenswrapper[4704]: 
I0122 16:32:06.332729 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:32:06 crc kubenswrapper[4704]: I0122 16:32:06.464001 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ws5kw"] Jan 22 16:32:06 crc kubenswrapper[4704]: I0122 16:32:06.464245 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ws5kw" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="registry-server" containerID="cri-o://408c700fd5d516d811885e848d436d86d8b2b25bda68dfe829156de6f42b989c" gracePeriod=2 Jan 22 16:32:06 crc kubenswrapper[4704]: I0122 16:32:06.589686 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:32:06 crc kubenswrapper[4704]: I0122 16:32:06.615285 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:32:08 crc kubenswrapper[4704]: I0122 16:32:08.261550 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vsbg"] Jan 22 16:32:08 crc kubenswrapper[4704]: I0122 16:32:08.564032 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8vsbg" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="registry-server" containerID="cri-o://f25b65a576d88b391238612853c5ab8a3236a12a765590ea31572bbfbc003914" gracePeriod=2 Jan 22 16:32:08 crc kubenswrapper[4704]: I0122 16:32:08.618896 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" podUID="aef72b7b-ce60-41c1-903a-16ebddec4d6f" containerName="oauth-openshift" containerID="cri-o://fa45ff431904d842dabcc7332822f93ecd838e4e1348f8d3b994f8e80f4d432b" gracePeriod=15 Jan 22 
16:32:09 crc kubenswrapper[4704]: I0122 16:32:09.571495 4704 generic.go:334] "Generic (PLEG): container finished" podID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerID="f25b65a576d88b391238612853c5ab8a3236a12a765590ea31572bbfbc003914" exitCode=0 Jan 22 16:32:09 crc kubenswrapper[4704]: I0122 16:32:09.571566 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vsbg" event={"ID":"97a2a078-75ba-4e1b-b477-4c076b1be529","Type":"ContainerDied","Data":"f25b65a576d88b391238612853c5ab8a3236a12a765590ea31572bbfbc003914"} Jan 22 16:32:09 crc kubenswrapper[4704]: I0122 16:32:09.574155 4704 generic.go:334] "Generic (PLEG): container finished" podID="aef72b7b-ce60-41c1-903a-16ebddec4d6f" containerID="fa45ff431904d842dabcc7332822f93ecd838e4e1348f8d3b994f8e80f4d432b" exitCode=0 Jan 22 16:32:09 crc kubenswrapper[4704]: I0122 16:32:09.574217 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" event={"ID":"aef72b7b-ce60-41c1-903a-16ebddec4d6f","Type":"ContainerDied","Data":"fa45ff431904d842dabcc7332822f93ecd838e4e1348f8d3b994f8e80f4d432b"} Jan 22 16:32:09 crc kubenswrapper[4704]: I0122 16:32:09.576212 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ws5kw_bd467440-1ed0-4085-b8d6-e4245de4ffda/registry-server/0.log" Jan 22 16:32:09 crc kubenswrapper[4704]: I0122 16:32:09.577014 4704 generic.go:334] "Generic (PLEG): container finished" podID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerID="408c700fd5d516d811885e848d436d86d8b2b25bda68dfe829156de6f42b989c" exitCode=137 Jan 22 16:32:09 crc kubenswrapper[4704]: I0122 16:32:09.577056 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ws5kw" event={"ID":"bd467440-1ed0-4085-b8d6-e4245de4ffda","Type":"ContainerDied","Data":"408c700fd5d516d811885e848d436d86d8b2b25bda68dfe829156de6f42b989c"} Jan 22 16:32:10 crc 
kubenswrapper[4704]: I0122 16:32:10.835531 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ws5kw_bd467440-1ed0-4085-b8d6-e4245de4ffda/registry-server/0.log" Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.836519 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.837938 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.937833 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jct6\" (UniqueName: \"kubernetes.io/projected/97a2a078-75ba-4e1b-b477-4c076b1be529-kube-api-access-5jct6\") pod \"97a2a078-75ba-4e1b-b477-4c076b1be529\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.937886 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-utilities\") pod \"bd467440-1ed0-4085-b8d6-e4245de4ffda\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.937922 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-catalog-content\") pod \"bd467440-1ed0-4085-b8d6-e4245de4ffda\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.937991 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-utilities\") pod \"97a2a078-75ba-4e1b-b477-4c076b1be529\" (UID: 
\"97a2a078-75ba-4e1b-b477-4c076b1be529\") " Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.938064 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpx9n\" (UniqueName: \"kubernetes.io/projected/bd467440-1ed0-4085-b8d6-e4245de4ffda-kube-api-access-jpx9n\") pod \"bd467440-1ed0-4085-b8d6-e4245de4ffda\" (UID: \"bd467440-1ed0-4085-b8d6-e4245de4ffda\") " Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.938100 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-catalog-content\") pod \"97a2a078-75ba-4e1b-b477-4c076b1be529\" (UID: \"97a2a078-75ba-4e1b-b477-4c076b1be529\") " Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.938997 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-utilities" (OuterVolumeSpecName: "utilities") pod "97a2a078-75ba-4e1b-b477-4c076b1be529" (UID: "97a2a078-75ba-4e1b-b477-4c076b1be529"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.939006 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-utilities" (OuterVolumeSpecName: "utilities") pod "bd467440-1ed0-4085-b8d6-e4245de4ffda" (UID: "bd467440-1ed0-4085-b8d6-e4245de4ffda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.948018 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a2a078-75ba-4e1b-b477-4c076b1be529-kube-api-access-5jct6" (OuterVolumeSpecName: "kube-api-access-5jct6") pod "97a2a078-75ba-4e1b-b477-4c076b1be529" (UID: "97a2a078-75ba-4e1b-b477-4c076b1be529"). 
InnerVolumeSpecName "kube-api-access-5jct6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.950360 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd467440-1ed0-4085-b8d6-e4245de4ffda-kube-api-access-jpx9n" (OuterVolumeSpecName: "kube-api-access-jpx9n") pod "bd467440-1ed0-4085-b8d6-e4245de4ffda" (UID: "bd467440-1ed0-4085-b8d6-e4245de4ffda"). InnerVolumeSpecName "kube-api-access-jpx9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:10 crc kubenswrapper[4704]: I0122 16:32:10.959303 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97a2a078-75ba-4e1b-b477-4c076b1be529" (UID: "97a2a078-75ba-4e1b-b477-4c076b1be529"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.005912 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd467440-1ed0-4085-b8d6-e4245de4ffda" (UID: "bd467440-1ed0-4085-b8d6-e4245de4ffda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.030586 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.040123 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpx9n\" (UniqueName: \"kubernetes.io/projected/bd467440-1ed0-4085-b8d6-e4245de4ffda-kube-api-access-jpx9n\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.040164 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.040176 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jct6\" (UniqueName: \"kubernetes.io/projected/97a2a078-75ba-4e1b-b477-4c076b1be529-kube-api-access-5jct6\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.040188 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.040204 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd467440-1ed0-4085-b8d6-e4245de4ffda-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.040215 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a2a078-75ba-4e1b-b477-4c076b1be529-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140583 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-login\") 
pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140652 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-dir\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140693 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-cliconfig\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140744 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-error\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140786 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-service-ca\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140843 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-trusted-ca-bundle\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 
16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140886 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-policies\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140939 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-router-certs\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.140974 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-session\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.141012 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-serving-cert\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.141053 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-ocp-branding-template\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.141087 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-idp-0-file-data\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.141112 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-provider-selection\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.141147 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zgdw\" (UniqueName: \"kubernetes.io/projected/aef72b7b-ce60-41c1-903a-16ebddec4d6f-kube-api-access-4zgdw\") pod \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\" (UID: \"aef72b7b-ce60-41c1-903a-16ebddec4d6f\") " Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.142046 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.142207 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.142533 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.142555 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.142868 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.143860 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.145264 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.145398 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef72b7b-ce60-41c1-903a-16ebddec4d6f-kube-api-access-4zgdw" (OuterVolumeSpecName: "kube-api-access-4zgdw") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "kube-api-access-4zgdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.145508 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.145802 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.145833 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.145933 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.146100 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.146320 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "aef72b7b-ce60-41c1-903a-16ebddec4d6f" (UID: "aef72b7b-ce60-41c1-903a-16ebddec4d6f"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242751 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242838 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242863 4704 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242881 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242900 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242918 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242936 4704 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242956 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242975 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.242995 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zgdw\" (UniqueName: \"kubernetes.io/projected/aef72b7b-ce60-41c1-903a-16ebddec4d6f-kube-api-access-4zgdw\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.243015 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.243034 4704 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aef72b7b-ce60-41c1-903a-16ebddec4d6f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.243051 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: 
I0122 16:32:11.243069 4704 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aef72b7b-ce60-41c1-903a-16ebddec4d6f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.588749 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8vsbg" event={"ID":"97a2a078-75ba-4e1b-b477-4c076b1be529","Type":"ContainerDied","Data":"5380147a69bc400d3db611ba3e8cb9e730c18f4e78655de4c4883c0b9f52069c"} Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.588836 4704 scope.go:117] "RemoveContainer" containerID="f25b65a576d88b391238612853c5ab8a3236a12a765590ea31572bbfbc003914" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.588957 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8vsbg" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.591732 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" event={"ID":"aef72b7b-ce60-41c1-903a-16ebddec4d6f","Type":"ContainerDied","Data":"22a8f86caa6a0bba218ce3af799b77d23dcabb8cfb7104850f856be7fcf999ce"} Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.591817 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l6zs2" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.594114 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ws5kw_bd467440-1ed0-4085-b8d6-e4245de4ffda/registry-server/0.log" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.595161 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ws5kw" event={"ID":"bd467440-1ed0-4085-b8d6-e4245de4ffda","Type":"ContainerDied","Data":"fcd207a9ea6a6a5c509f6f407b00c6101834593217b6cc69a611d165fa3bc011"} Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.595281 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ws5kw" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.620030 4704 scope.go:117] "RemoveContainer" containerID="919d4ca51c3938d14c876acfcfef216c4068f09603e25b8f119845c1d2d5bb53" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.627545 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vsbg"] Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.634207 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8vsbg"] Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.642633 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" path="/var/lib/kubelet/pods/97a2a078-75ba-4e1b-b477-4c076b1be529/volumes" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.643430 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l6zs2"] Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.645288 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l6zs2"] Jan 22 16:32:11 crc 
kubenswrapper[4704]: I0122 16:32:11.657951 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ws5kw"] Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.659669 4704 scope.go:117] "RemoveContainer" containerID="64da1e28395c2e14079ffcaea7fcd4776598829837d36d4ecb560cc0cbe6058a" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.660166 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ws5kw"] Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.673894 4704 scope.go:117] "RemoveContainer" containerID="fa45ff431904d842dabcc7332822f93ecd838e4e1348f8d3b994f8e80f4d432b" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.686839 4704 scope.go:117] "RemoveContainer" containerID="408c700fd5d516d811885e848d436d86d8b2b25bda68dfe829156de6f42b989c" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.698656 4704 scope.go:117] "RemoveContainer" containerID="d103572b33c74328e49718b7ce4203979529dff095e9bc8a75a826265b7e8691" Jan 22 16:32:11 crc kubenswrapper[4704]: I0122 16:32:11.710560 4704 scope.go:117] "RemoveContainer" containerID="f9411ae2c0e392f15822fb9bdf009a6840a4fccc03effaeaa1f27ad6a13b4e00" Jan 22 16:32:13 crc kubenswrapper[4704]: I0122 16:32:13.124846 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:32:13 crc kubenswrapper[4704]: I0122 16:32:13.642623 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef72b7b-ce60-41c1-903a-16ebddec4d6f" path="/var/lib/kubelet/pods/aef72b7b-ce60-41c1-903a-16ebddec4d6f/volumes" Jan 22 16:32:13 crc kubenswrapper[4704]: I0122 16:32:13.643667 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" path="/var/lib/kubelet/pods/bd467440-1ed0-4085-b8d6-e4245de4ffda/volumes" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786355 4704 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7cf78455b6-69d8s"] Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786801 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786813 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786825 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786831 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786838 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786844 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786856 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786862 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786870 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786876 4704 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786886 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786891 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786899 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786906 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786915 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786922 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786931 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef72b7b-ce60-41c1-903a-16ebddec4d6f" containerName="oauth-openshift" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786938 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef72b7b-ce60-41c1-903a-16ebddec4d6f" containerName="oauth-openshift" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786947 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786954 4704 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786962 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786967 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786974 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786979 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="extract-utilities" Jan 22 16:32:21 crc kubenswrapper[4704]: E0122 16:32:21.786990 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.786996 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="extract-content" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.787080 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a2a078-75ba-4e1b-b477-4c076b1be529" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.787094 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef72b7b-ce60-41c1-903a-16ebddec4d6f" containerName="oauth-openshift" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.787100 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85b5045-b0f3-49cd-97e4-a4c0688313e1" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.787107 4704 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="09954e8e-8b14-4f6c-88b9-75cb8fac0f4c" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.787115 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd467440-1ed0-4085-b8d6-e4245de4ffda" containerName="registry-server" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.787466 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.789220 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.789487 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.791041 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.791089 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.791093 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.791534 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.791576 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.791759 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.792076 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.792133 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.792359 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.792461 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.804236 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cf78455b6-69d8s"] Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.804805 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.814092 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.814504 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984374 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " 
pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984426 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-audit-policies\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984453 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-session\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984479 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984514 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984574 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984607 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984626 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984642 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984660 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984684 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca3a5666-83f0-4de8-afff-9091a030ee47-audit-dir\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984711 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gpg6\" (UniqueName: \"kubernetes.io/projected/ca3a5666-83f0-4de8-afff-9091a030ee47-kube-api-access-7gpg6\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984726 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:21 crc kubenswrapper[4704]: I0122 16:32:21.984743 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") 
" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085587 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca3a5666-83f0-4de8-afff-9091a030ee47-audit-dir\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085660 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gpg6\" (UniqueName: \"kubernetes.io/projected/ca3a5666-83f0-4de8-afff-9091a030ee47-kube-api-access-7gpg6\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085681 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085700 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085722 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085740 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-audit-policies\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085758 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-session\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085784 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085831 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc 
kubenswrapper[4704]: I0122 16:32:22.085855 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085882 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085899 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085915 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.085933 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.087112 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca3a5666-83f0-4de8-afff-9091a030ee47-audit-dir\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.088232 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.088262 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.088262 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: 
I0122 16:32:22.088551 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca3a5666-83f0-4de8-afff-9091a030ee47-audit-policies\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.092925 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-session\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.093939 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.094196 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.094222 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: 
\"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.094633 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.096265 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.096410 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.100281 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca3a5666-83f0-4de8-afff-9091a030ee47-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.102754 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7gpg6\" (UniqueName: \"kubernetes.io/projected/ca3a5666-83f0-4de8-afff-9091a030ee47-kube-api-access-7gpg6\") pod \"oauth-openshift-7cf78455b6-69d8s\" (UID: \"ca3a5666-83f0-4de8-afff-9091a030ee47\") " pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.160786 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.351391 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cf78455b6-69d8s"] Jan 22 16:32:22 crc kubenswrapper[4704]: I0122 16:32:22.668294 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" event={"ID":"ca3a5666-83f0-4de8-afff-9091a030ee47","Type":"ContainerStarted","Data":"f6660ce316fc89a2100f45097cea6b9cd2481d1b2855b85594d61c719d177cde"} Jan 22 16:32:23 crc kubenswrapper[4704]: I0122 16:32:23.674063 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" event={"ID":"ca3a5666-83f0-4de8-afff-9091a030ee47","Type":"ContainerStarted","Data":"238e57a58cda1e5ad6bb3580c074861720b121962a86e8612582ac156834b2e0"} Jan 22 16:32:23 crc kubenswrapper[4704]: I0122 16:32:23.675914 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:23 crc kubenswrapper[4704]: I0122 16:32:23.681753 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" Jan 22 16:32:23 crc kubenswrapper[4704]: I0122 16:32:23.712093 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7cf78455b6-69d8s" 
podStartSLOduration=40.712071139 podStartE2EDuration="40.712071139s" podCreationTimestamp="2026-01-22 16:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:23.709131032 +0000 UTC m=+236.353677752" watchObservedRunningTime="2026-01-22 16:32:23.712071139 +0000 UTC m=+236.356617829" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.392019 4704 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.392846 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64" gracePeriod=15 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.392900 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22" gracePeriod=15 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.392989 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2" gracePeriod=15 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.393018 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2" gracePeriod=15 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.392862 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205" gracePeriod=15 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.393614 4704 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.393951 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.393984 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.394000 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394011 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.394027 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394038 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.394050 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394060 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.394075 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394085 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.394098 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394109 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394268 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394289 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394303 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394327 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 16:32:26 crc kubenswrapper[4704]: 
I0122 16:32:26.394346 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.394531 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394551 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.394692 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.395928 4704 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.396494 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.401143 4704 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.437139 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443616 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443687 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443722 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443762 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443822 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443846 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443890 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.443915 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545026 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545101 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545170 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545259 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545270 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545310 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545327 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545343 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545384 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545389 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545435 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545440 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545458 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545457 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545493 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.545519 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.695448 4704 generic.go:334] "Generic (PLEG): container finished" podID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" containerID="2fa5d21d56510c86cbc2948f633d8ced4691d38ba87f4301637dc7b54fffa575" exitCode=0 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.695588 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07f8c1e2-21b3-4c4a-a235-8a5bc193719c","Type":"ContainerDied","Data":"2fa5d21d56510c86cbc2948f633d8ced4691d38ba87f4301637dc7b54fffa575"} Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.696665 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.697265 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.698425 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.700336 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:32:26 crc kubenswrapper[4704]: 
I0122 16:32:26.701281 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205" exitCode=0 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.701331 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2" exitCode=0 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.701346 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22" exitCode=0 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.701361 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2" exitCode=2 Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.701393 4704 scope.go:117] "RemoveContainer" containerID="9e9f38475b7eee739b0a85a0320c511e3fb87d53929147aa413368031b8d1368" Jan 22 16:32:26 crc kubenswrapper[4704]: I0122 16:32:26.723654 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 16:32:26 crc kubenswrapper[4704]: W0122 16:32:26.768397 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-119341452fe99b96875bd6c0568325a6f3c039e59508ffd1bc940184104ddb97 WatchSource:0}: Error finding container 119341452fe99b96875bd6c0568325a6f3c039e59508ffd1bc940184104ddb97: Status 404 returned error can't find the container with id 119341452fe99b96875bd6c0568325a6f3c039e59508ffd1bc940184104ddb97 Jan 22 16:32:26 crc kubenswrapper[4704]: E0122 16:32:26.772880 4704 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d1aaa29a94459 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:32:26.772194393 +0000 UTC m=+239.416741123,LastTimestamp:2026-01-22 16:32:26.772194393 +0000 UTC m=+239.416741123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:32:27 crc kubenswrapper[4704]: E0122 16:32:27.157072 4704 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d1aaa29a94459 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:32:26.772194393 +0000 UTC m=+239.416741123,LastTimestamp:2026-01-22 16:32:26.772194393 +0000 UTC m=+239.416741123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.643041 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.644317 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.711190 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.714553 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5f6543888d2b9afaebc605e617557aa53893dd4dbe461549d3fc00369b8d27a7"} Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.714608 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"119341452fe99b96875bd6c0568325a6f3c039e59508ffd1bc940184104ddb97"} Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.715232 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.715834 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.939701 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.940948 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:27 crc kubenswrapper[4704]: I0122 16:32:27.941511 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.063038 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-var-lock\") pod \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.063178 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kubelet-dir\") pod \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.063177 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-var-lock" (OuterVolumeSpecName: "var-lock") pod "07f8c1e2-21b3-4c4a-a235-8a5bc193719c" (UID: "07f8c1e2-21b3-4c4a-a235-8a5bc193719c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.063218 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07f8c1e2-21b3-4c4a-a235-8a5bc193719c" (UID: "07f8c1e2-21b3-4c4a-a235-8a5bc193719c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.063237 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kube-api-access\") pod \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\" (UID: \"07f8c1e2-21b3-4c4a-a235-8a5bc193719c\") " Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.064090 4704 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.064144 4704 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.071585 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07f8c1e2-21b3-4c4a-a235-8a5bc193719c" (UID: "07f8c1e2-21b3-4c4a-a235-8a5bc193719c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.166137 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07f8c1e2-21b3-4c4a-a235-8a5bc193719c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.725352 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07f8c1e2-21b3-4c4a-a235-8a5bc193719c","Type":"ContainerDied","Data":"7ebca18959f30df16c2446f09698249b99dc1bd676e6c34e9cde909d415273a6"} Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.725582 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebca18959f30df16c2446f09698249b99dc1bd676e6c34e9cde909d415273a6" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.725402 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.743826 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:28 crc kubenswrapper[4704]: I0122 16:32:28.744254 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.359991 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.362518 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.363895 4704 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.364827 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.365099 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483012 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483055 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483132 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483152 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483195 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483259 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483317 4704 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483328 4704 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.483336 4704 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.642201 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.742452 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.744223 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64" exitCode=0 Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.744319 4704 scope.go:117] "RemoveContainer" containerID="1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.744324 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.745259 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.748285 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.748705 4704 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.750851 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.751203 4704 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.751474 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.757018 4704 scope.go:117] "RemoveContainer" containerID="de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.770928 4704 scope.go:117] "RemoveContainer" containerID="c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.784833 4704 scope.go:117] "RemoveContainer" containerID="0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.798346 4704 scope.go:117] "RemoveContainer" containerID="e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.814914 4704 scope.go:117] "RemoveContainer" containerID="e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.836208 4704 scope.go:117] "RemoveContainer" containerID="1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205" Jan 22 16:32:29 crc kubenswrapper[4704]: E0122 16:32:29.836663 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\": container with ID starting with 1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205 not found: ID does not exist" 
containerID="1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.836704 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205"} err="failed to get container status \"1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\": rpc error: code = NotFound desc = could not find container \"1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205\": container with ID starting with 1212ca7f38fc7f12a34074f47db6b9ff1505ed659c4360a32b43ee77e9f85205 not found: ID does not exist" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.836730 4704 scope.go:117] "RemoveContainer" containerID="de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2" Jan 22 16:32:29 crc kubenswrapper[4704]: E0122 16:32:29.837498 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\": container with ID starting with de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2 not found: ID does not exist" containerID="de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.837521 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2"} err="failed to get container status \"de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\": rpc error: code = NotFound desc = could not find container \"de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2\": container with ID starting with de70e67642d146729ae2e43435c05e79eaf44290d579c978370430aa5142c8a2 not found: ID does not exist" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.837533 4704 scope.go:117] 
"RemoveContainer" containerID="c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22" Jan 22 16:32:29 crc kubenswrapper[4704]: E0122 16:32:29.837838 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\": container with ID starting with c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22 not found: ID does not exist" containerID="c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.837857 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22"} err="failed to get container status \"c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\": rpc error: code = NotFound desc = could not find container \"c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22\": container with ID starting with c14941f31f59a32cd78c2e3031f3cf23a8b004a0d8ce7e4d0353decbd3fe7a22 not found: ID does not exist" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.837869 4704 scope.go:117] "RemoveContainer" containerID="0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2" Jan 22 16:32:29 crc kubenswrapper[4704]: E0122 16:32:29.839417 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\": container with ID starting with 0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2 not found: ID does not exist" containerID="0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.839542 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2"} err="failed to get container status \"0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\": rpc error: code = NotFound desc = could not find container \"0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2\": container with ID starting with 0b3174ef634230f7c4a67609ea00ebfef945b6ffa3616235dfb8aa9ee6da9aa2 not found: ID does not exist" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.839558 4704 scope.go:117] "RemoveContainer" containerID="e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64" Jan 22 16:32:29 crc kubenswrapper[4704]: E0122 16:32:29.840141 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\": container with ID starting with e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64 not found: ID does not exist" containerID="e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.840178 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64"} err="failed to get container status \"e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\": rpc error: code = NotFound desc = could not find container \"e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64\": container with ID starting with e141b9a4a2f51c630b60fdb5d8812b513e3ac952180a841b52dca6a75c7dab64 not found: ID does not exist" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.840225 4704 scope.go:117] "RemoveContainer" containerID="e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb" Jan 22 16:32:29 crc kubenswrapper[4704]: E0122 16:32:29.840640 4704 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\": container with ID starting with e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb not found: ID does not exist" containerID="e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb" Jan 22 16:32:29 crc kubenswrapper[4704]: I0122 16:32:29.840690 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb"} err="failed to get container status \"e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\": rpc error: code = NotFound desc = could not find container \"e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb\": container with ID starting with e0fb3231d20039c1c50052f51c6d0c0b62fa7ac707b9d1b921f6cd07a4a371bb not found: ID does not exist" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.505765 4704 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.506337 4704 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.506559 4704 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.506749 4704 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.507078 4704 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:33 crc kubenswrapper[4704]: I0122 16:32:33.507134 4704 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.507541 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="200ms" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.641721 4704 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.249:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" volumeName="registry-storage" Jan 22 16:32:33 crc kubenswrapper[4704]: E0122 16:32:33.708722 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="400ms" Jan 22 16:32:34 crc kubenswrapper[4704]: E0122 16:32:34.110496 4704 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="800ms" Jan 22 16:32:34 crc kubenswrapper[4704]: E0122 16:32:34.911556 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="1.6s" Jan 22 16:32:36 crc kubenswrapper[4704]: E0122 16:32:36.511888 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="3.2s" Jan 22 16:32:37 crc kubenswrapper[4704]: E0122 16:32:37.158237 4704 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.249:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d1aaa29a94459 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 16:32:26.772194393 +0000 UTC m=+239.416741123,LastTimestamp:2026-01-22 16:32:26.772194393 +0000 UTC m=+239.416741123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 16:32:37 crc kubenswrapper[4704]: I0122 16:32:37.635561 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:37 crc kubenswrapper[4704]: I0122 16:32:37.636148 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:39 crc kubenswrapper[4704]: E0122 16:32:39.713863 4704 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.249:6443: connect: connection refused" interval="6.4s" Jan 22 16:32:40 crc kubenswrapper[4704]: I0122 16:32:40.632933 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:40 crc kubenswrapper[4704]: I0122 16:32:40.634218 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:40 crc kubenswrapper[4704]: I0122 16:32:40.634647 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:40 crc kubenswrapper[4704]: I0122 16:32:40.651133 4704 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:40 crc kubenswrapper[4704]: I0122 16:32:40.651179 4704 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:40 crc kubenswrapper[4704]: E0122 16:32:40.651678 4704 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:40 crc kubenswrapper[4704]: I0122 16:32:40.652639 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:40 crc kubenswrapper[4704]: W0122 16:32:40.678854 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b7e92650d8100d04f0d3ef637f02bfd556ecd7d80a89e40df7a311cd7e7526ff WatchSource:0}: Error finding container b7e92650d8100d04f0d3ef637f02bfd556ecd7d80a89e40df7a311cd7e7526ff: Status 404 returned error can't find the container with id b7e92650d8100d04f0d3ef637f02bfd556ecd7d80a89e40df7a311cd7e7526ff Jan 22 16:32:40 crc kubenswrapper[4704]: I0122 16:32:40.811712 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b7e92650d8100d04f0d3ef637f02bfd556ecd7d80a89e40df7a311cd7e7526ff"} Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.821267 4704 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e2c558a436c0b7391a56725ef8e2dc062df54018288b3462a24832457d416d98" exitCode=0 Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.822034 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e2c558a436c0b7391a56725ef8e2dc062df54018288b3462a24832457d416d98"} Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.822320 4704 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.822347 4704 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:41 crc kubenswrapper[4704]: E0122 16:32:41.822910 4704 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.822950 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.823341 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.826912 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.826981 4704 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf" exitCode=1 Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.827018 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf"} Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.827672 4704 scope.go:117] "RemoveContainer" 
containerID="970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.828599 4704 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.829220 4704 status_manager.go:851] "Failed to get status for pod" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:41 crc kubenswrapper[4704]: I0122 16:32:41.829840 4704 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.249:6443: connect: connection refused" Jan 22 16:32:42 crc kubenswrapper[4704]: I0122 16:32:42.835660 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e705f8631afe12da9e4eceb4794bcf04ad31de200f573205619ed9ddaee98a1"} Jan 22 16:32:42 crc kubenswrapper[4704]: I0122 16:32:42.835910 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ad5218171045a48439668419cded1054879161386bce375462851c4a15f9c15"} Jan 22 16:32:42 crc kubenswrapper[4704]: I0122 16:32:42.835920 
4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d72166e0e17ec7d06da9e6d0267cef54e4bc7ab50cd0ad849a3466356dd9d9db"} Jan 22 16:32:42 crc kubenswrapper[4704]: I0122 16:32:42.835928 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"468bf3d865f14b4485b1c4c62738b09e7b7307f304ca4d0c5825fb6b30f7e22e"} Jan 22 16:32:42 crc kubenswrapper[4704]: I0122 16:32:42.840027 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 16:32:42 crc kubenswrapper[4704]: I0122 16:32:42.840063 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"613029232ca94f14253f359f4025d031069db318c937c035989150a8d572e928"} Jan 22 16:32:43 crc kubenswrapper[4704]: I0122 16:32:43.847459 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"817dbeb73cb605e27454869125e0798ac310e1569191890f68d0b0020204caa4"} Jan 22 16:32:43 crc kubenswrapper[4704]: I0122 16:32:43.848047 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:43 crc kubenswrapper[4704]: I0122 16:32:43.847732 4704 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:43 crc kubenswrapper[4704]: I0122 16:32:43.848076 4704 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:45 crc kubenswrapper[4704]: I0122 16:32:45.016777 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:32:45 crc kubenswrapper[4704]: I0122 16:32:45.016955 4704 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 16:32:45 crc kubenswrapper[4704]: I0122 16:32:45.016996 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 16:32:45 crc kubenswrapper[4704]: I0122 16:32:45.653238 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:45 crc kubenswrapper[4704]: I0122 16:32:45.653822 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:45 crc kubenswrapper[4704]: I0122 16:32:45.661193 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:46 crc kubenswrapper[4704]: I0122 16:32:46.926976 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:32:48 crc kubenswrapper[4704]: I0122 16:32:48.864383 4704 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:48 crc kubenswrapper[4704]: I0122 16:32:48.891015 4704 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:48 crc kubenswrapper[4704]: I0122 16:32:48.891266 4704 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:48 crc kubenswrapper[4704]: I0122 16:32:48.894138 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:32:48 crc kubenswrapper[4704]: I0122 16:32:48.896233 4704 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="55e14c14-0c32-4761-a1ce-01ea1bb2b74a" Jan 22 16:32:49 crc kubenswrapper[4704]: I0122 16:32:49.897192 4704 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:49 crc kubenswrapper[4704]: I0122 16:32:49.897219 4704 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d30d8677-1d99-406b-af8d-fd0c5c7a643d" Jan 22 16:32:55 crc kubenswrapper[4704]: I0122 16:32:55.016340 4704 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 16:32:55 crc kubenswrapper[4704]: I0122 16:32:55.016683 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 16:32:57 crc kubenswrapper[4704]: I0122 16:32:57.643319 4704 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="55e14c14-0c32-4761-a1ce-01ea1bb2b74a" Jan 22 16:32:58 crc kubenswrapper[4704]: I0122 16:32:58.986909 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.029589 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.165166 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.229858 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.292094 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.434562 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.495349 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.505133 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.657522 4704 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 16:32:59 crc kubenswrapper[4704]: I0122 16:32:59.776737 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.195744 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.560600 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.560889 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.777741 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.796836 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.830009 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.887161 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 16:33:00 crc kubenswrapper[4704]: I0122 16:33:00.908156 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.155532 4704 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.197246 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.278683 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.285067 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.324667 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.567858 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.692592 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.743086 4704 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.774931 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 16:33:01 crc kubenswrapper[4704]: I0122 16:33:01.796353 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.113680 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.118202 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.233372 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.483787 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.495986 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.496054 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.507152 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.522004 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.566689 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.649942 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.725628 4704 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.736096 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.829758 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.840822 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.848364 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.923513 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 16:33:02 crc kubenswrapper[4704]: I0122 16:33:02.947960 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.066124 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.085872 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.117608 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.128743 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.198614 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.235076 4704 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.260429 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.263084 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.271432 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.282425 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.287729 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.339085 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.339826 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.405491 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.495959 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.503609 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.507660 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.537967 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.583416 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.584237 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.602693 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.608588 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.657186 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.668088 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.831457 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.835135 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.875931 4704 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.902681 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 16:33:03 crc kubenswrapper[4704]: I0122 16:33:03.972456 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.037838 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.212401 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.290565 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.300486 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.305169 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.330662 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.331038 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.367199 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 
16:33:04.396813 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.400475 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.405595 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.428935 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.433657 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.445858 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.492145 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.533554 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.677162 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.684118 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.705028 4704 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.714658 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.720703 4704 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.727203 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.769457 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.775380 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.898115 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.930420 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.937870 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.977481 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 16:33:04 crc kubenswrapper[4704]: I0122 16:33:04.991759 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.017096 4704 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.017142 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.017183 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.017694 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"613029232ca94f14253f359f4025d031069db318c937c035989150a8d572e928"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.017786 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://613029232ca94f14253f359f4025d031069db318c937c035989150a8d572e928" gracePeriod=30 Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.141418 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.148492 4704 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.163506 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.166684 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.228469 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.355316 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.371663 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.408352 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.419611 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.437265 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.459102 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.472753 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 16:33:05 crc 
kubenswrapper[4704]: I0122 16:33:05.510905 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.762821 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.798011 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.874993 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.891627 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.924835 4704 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 16:33:05 crc kubenswrapper[4704]: I0122 16:33:05.987429 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.043724 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.102589 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.146188 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.255667 4704 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.274207 4704 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.505945 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.545520 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.548180 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.556092 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.665918 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.738558 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.760209 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.776441 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.814055 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.862995 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 
16:33:06.890311 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 16:33:06 crc kubenswrapper[4704]: I0122 16:33:06.946314 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.100606 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.206022 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.263888 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.279016 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.289115 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.307402 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.334303 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.406261 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.423648 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 
22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.425671 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.448241 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.467812 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.522892 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.563748 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.584068 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.644607 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.649401 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.651198 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.742956 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.758755 4704 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.842468 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.859884 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 16:33:07 crc kubenswrapper[4704]: I0122 16:33:07.937592 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.158081 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.244065 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.306771 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.364770 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.390504 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.412325 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.434247 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.515540 
4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.561898 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.604140 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.670650 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.731962 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.745850 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.746964 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.846677 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.898572 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 16:33:08 crc kubenswrapper[4704]: I0122 16:33:08.974203 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.069490 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.094079 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.245485 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.324290 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.365241 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.381918 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.383625 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.383595276 podStartE2EDuration="43.383595276s" podCreationTimestamp="2026-01-22 16:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:32:48.755652979 +0000 UTC m=+261.400199699" watchObservedRunningTime="2026-01-22 16:33:09.383595276 +0000 UTC m=+282.028142016" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.390916 4704 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.393597 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.393709 4704 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.402426 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.404979 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.419073 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.419047335 podStartE2EDuration="21.419047335s" podCreationTimestamp="2026-01-22 16:32:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:33:09.416482897 +0000 UTC m=+282.061029667" watchObservedRunningTime="2026-01-22 16:33:09.419047335 +0000 UTC m=+282.063594075" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.505912 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.559770 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.593078 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.640148 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.666230 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 
22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.700302 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.708111 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.799950 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.898853 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.902388 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 16:33:09 crc kubenswrapper[4704]: I0122 16:33:09.970715 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:09.981275 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.000831 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.005629 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.006029 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.087219 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.311414 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.331667 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.333659 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.342302 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.342341 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.593474 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.704292 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.737922 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.738784 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 16:33:10 crc kubenswrapper[4704]: I0122 16:33:10.846520 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.021165 4704 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.072999 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.128059 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.326912 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.364040 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.456051 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.495986 4704 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.496280 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://5f6543888d2b9afaebc605e617557aa53893dd4dbe461549d3fc00369b8d27a7" gracePeriod=5 Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.545518 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.631260 4704 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.662889 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.666919 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.703573 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.708880 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.795942 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.891940 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 16:33:11 crc kubenswrapper[4704]: I0122 16:33:11.897266 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.020200 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.048330 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.146504 4704 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.382293 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.488830 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.542092 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.656831 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.742269 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 16:33:12 crc kubenswrapper[4704]: I0122 16:33:12.866443 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.010550 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.080189 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.080497 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.141427 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 
16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.310499 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.312419 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.369069 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.495047 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.503595 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 22 16:33:13 crc kubenswrapper[4704]: I0122 16:33:13.823607 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 22 16:33:14 crc kubenswrapper[4704]: I0122 16:33:14.046731 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 22 16:33:14 crc kubenswrapper[4704]: I0122 16:33:14.072598 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 22 16:33:14 crc kubenswrapper[4704]: I0122 16:33:14.089129 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.070653 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.071125 4704 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="5f6543888d2b9afaebc605e617557aa53893dd4dbe461549d3fc00369b8d27a7" exitCode=137
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.071192 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="119341452fe99b96875bd6c0568325a6f3c039e59508ffd1bc940184104ddb97"
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.072586 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.072678 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108442 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108546 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108579 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108600 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108605 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108630 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108683 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108752 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.108749 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.109169 4704 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.109190 4704 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.109203 4704 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.109213 4704 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.117314 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.210847 4704 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.641413 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.641642 4704 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.651598 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.651633 4704 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="de0fbcfe-2eed-4b72-877e-793fc4496e1d"
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.654964 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 16:33:17 crc kubenswrapper[4704]: I0122 16:33:17.654999 4704 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="de0fbcfe-2eed-4b72-877e-793fc4496e1d"
Jan 22 16:33:18 crc kubenswrapper[4704]: I0122 16:33:18.077295 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 16:33:27 crc kubenswrapper[4704]: I0122 16:33:27.493278 4704 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.227323 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lvsjg"]
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.228111 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" podUID="08014b73-1836-45da-a3fa-8a05ad57ebad" containerName="controller-manager" containerID="cri-o://5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638" gracePeriod=30
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.235672 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"]
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.236093 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" podUID="5816a839-8a48-4e39-ae5e-82df31d282df" containerName="route-controller-manager" containerID="cri-o://42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56" gracePeriod=30
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.639970 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg"
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.644779 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826186 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5816a839-8a48-4e39-ae5e-82df31d282df-serving-cert\") pod \"5816a839-8a48-4e39-ae5e-82df31d282df\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826241 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-config\") pod \"08014b73-1836-45da-a3fa-8a05ad57ebad\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826282 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles\") pod \"08014b73-1836-45da-a3fa-8a05ad57ebad\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826308 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert\") pod \"08014b73-1836-45da-a3fa-8a05ad57ebad\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826327 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-client-ca\") pod \"5816a839-8a48-4e39-ae5e-82df31d282df\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826376 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-config\") pod \"5816a839-8a48-4e39-ae5e-82df31d282df\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826391 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8q7m\" (UniqueName: \"kubernetes.io/projected/08014b73-1836-45da-a3fa-8a05ad57ebad-kube-api-access-p8q7m\") pod \"08014b73-1836-45da-a3fa-8a05ad57ebad\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826417 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca\") pod \"08014b73-1836-45da-a3fa-8a05ad57ebad\" (UID: \"08014b73-1836-45da-a3fa-8a05ad57ebad\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.826457 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzqqg\" (UniqueName: \"kubernetes.io/projected/5816a839-8a48-4e39-ae5e-82df31d282df-kube-api-access-kzqqg\") pod \"5816a839-8a48-4e39-ae5e-82df31d282df\" (UID: \"5816a839-8a48-4e39-ae5e-82df31d282df\") "
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.827270 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-config" (OuterVolumeSpecName: "config") pod "08014b73-1836-45da-a3fa-8a05ad57ebad" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.827447 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-client-ca" (OuterVolumeSpecName: "client-ca") pod "5816a839-8a48-4e39-ae5e-82df31d282df" (UID: "5816a839-8a48-4e39-ae5e-82df31d282df"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.827487 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "08014b73-1836-45da-a3fa-8a05ad57ebad" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.827717 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-config" (OuterVolumeSpecName: "config") pod "5816a839-8a48-4e39-ae5e-82df31d282df" (UID: "5816a839-8a48-4e39-ae5e-82df31d282df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.827739 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca" (OuterVolumeSpecName: "client-ca") pod "08014b73-1836-45da-a3fa-8a05ad57ebad" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.831954 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5816a839-8a48-4e39-ae5e-82df31d282df-kube-api-access-kzqqg" (OuterVolumeSpecName: "kube-api-access-kzqqg") pod "5816a839-8a48-4e39-ae5e-82df31d282df" (UID: "5816a839-8a48-4e39-ae5e-82df31d282df"). InnerVolumeSpecName "kube-api-access-kzqqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.831986 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "08014b73-1836-45da-a3fa-8a05ad57ebad" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.832015 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08014b73-1836-45da-a3fa-8a05ad57ebad-kube-api-access-p8q7m" (OuterVolumeSpecName: "kube-api-access-p8q7m") pod "08014b73-1836-45da-a3fa-8a05ad57ebad" (UID: "08014b73-1836-45da-a3fa-8a05ad57ebad"). InnerVolumeSpecName "kube-api-access-p8q7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.838071 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5816a839-8a48-4e39-ae5e-82df31d282df-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5816a839-8a48-4e39-ae5e-82df31d282df" (UID: "5816a839-8a48-4e39-ae5e-82df31d282df"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927718 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927767 4704 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927783 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08014b73-1836-45da-a3fa-8a05ad57ebad-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927810 4704 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927821 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5816a839-8a48-4e39-ae5e-82df31d282df-config\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927830 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8q7m\" (UniqueName: \"kubernetes.io/projected/08014b73-1836-45da-a3fa-8a05ad57ebad-kube-api-access-p8q7m\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927839 4704 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08014b73-1836-45da-a3fa-8a05ad57ebad-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927847 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzqqg\" (UniqueName: \"kubernetes.io/projected/5816a839-8a48-4e39-ae5e-82df31d282df-kube-api-access-kzqqg\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:32 crc kubenswrapper[4704]: I0122 16:33:32.927854 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5816a839-8a48-4e39-ae5e-82df31d282df-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.198410 4704 generic.go:334] "Generic (PLEG): container finished" podID="08014b73-1836-45da-a3fa-8a05ad57ebad" containerID="5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638" exitCode=0
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.198479 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.198498 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" event={"ID":"08014b73-1836-45da-a3fa-8a05ad57ebad","Type":"ContainerDied","Data":"5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638"}
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.198528 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lvsjg" event={"ID":"08014b73-1836-45da-a3fa-8a05ad57ebad","Type":"ContainerDied","Data":"915fc3f33b7c5d97f8f307690aeacc4012a6aafd775e308d238ec46b3dc456a3"}
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.198555 4704 scope.go:117] "RemoveContainer" containerID="5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.201271 4704 generic.go:334] "Generic (PLEG): container finished" podID="5816a839-8a48-4e39-ae5e-82df31d282df" containerID="42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56" exitCode=0
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.201321 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.201335 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" event={"ID":"5816a839-8a48-4e39-ae5e-82df31d282df","Type":"ContainerDied","Data":"42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56"}
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.201355 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp" event={"ID":"5816a839-8a48-4e39-ae5e-82df31d282df","Type":"ContainerDied","Data":"8c6cc3e869d47ce0a49bec3454eae50adcd94850ff9dcebe094e8c7699c13b44"}
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.204750 4704 generic.go:334] "Generic (PLEG): container finished" podID="a30726df-cfa8-4da0-9aa6-419437441379" containerID="0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8" exitCode=0
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.204785 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" event={"ID":"a30726df-cfa8-4da0-9aa6-419437441379","Type":"ContainerDied","Data":"0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8"}
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.205152 4704 scope.go:117] "RemoveContainer" containerID="0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.219811 4704 scope.go:117] "RemoveContainer" containerID="5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638"
Jan 22 16:33:33 crc kubenswrapper[4704]: E0122 16:33:33.220652 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638\": container with ID starting with 5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638 not found: ID does not exist" containerID="5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.220700 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638"} err="failed to get container status \"5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638\": rpc error: code = NotFound desc = could not find container \"5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638\": container with ID starting with 5348a2aac90f306f71336017fe3afb713cef00bbdcdcf372add13981806dc638 not found: ID does not exist"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.220734 4704 scope.go:117] "RemoveContainer" containerID="42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.243083 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lvsjg"]
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.248813 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lvsjg"]
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.252631 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"]
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.255853 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-97vvp"]
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.258272 4704 scope.go:117] "RemoveContainer" containerID="42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56"
Jan 22 16:33:33 crc kubenswrapper[4704]: E0122 16:33:33.258901 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56\": container with ID starting with 42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56 not found: ID does not exist" containerID="42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.258935 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56"} err="failed to get container status \"42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56\": rpc error: code = NotFound desc = could not find container \"42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56\": container with ID starting with 42697af60dd5416ba373b05e6b4b3bf3e89f389656830b9f69bf3fff15713c56 not found: ID does not exist"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.641125 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08014b73-1836-45da-a3fa-8a05ad57ebad" path="/var/lib/kubelet/pods/08014b73-1836-45da-a3fa-8a05ad57ebad/volumes"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.641991 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5816a839-8a48-4e39-ae5e-82df31d282df" path="/var/lib/kubelet/pods/5816a839-8a48-4e39-ae5e-82df31d282df/volumes"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829427 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"]
Jan 22 16:33:33 crc kubenswrapper[4704]: E0122 16:33:33.829697 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829714 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 22 16:33:33 crc kubenswrapper[4704]: E0122 16:33:33.829728 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" containerName="installer"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829735 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" containerName="installer"
Jan 22 16:33:33 crc kubenswrapper[4704]: E0122 16:33:33.829748 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5816a839-8a48-4e39-ae5e-82df31d282df" containerName="route-controller-manager"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829754 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5816a839-8a48-4e39-ae5e-82df31d282df" containerName="route-controller-manager"
Jan 22 16:33:33 crc kubenswrapper[4704]: E0122 16:33:33.829763 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08014b73-1836-45da-a3fa-8a05ad57ebad" containerName="controller-manager"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829771 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="08014b73-1836-45da-a3fa-8a05ad57ebad" containerName="controller-manager"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829879 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="08014b73-1836-45da-a3fa-8a05ad57ebad" containerName="controller-manager"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829889 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f8c1e2-21b3-4c4a-a235-8a5bc193719c" containerName="installer"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829896 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5816a839-8a48-4e39-ae5e-82df31d282df" containerName="route-controller-manager"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.829903 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.830275 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.832467 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.832967 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.833246 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.833302 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.833411 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.833488 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.834073 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-zpx59"]
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.834743 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.837142 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.837372 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.837592 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.838093 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.839073 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.839378 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.841989 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-zpx59"]
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.846505 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"]
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.847970 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.940813 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-client-ca\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.940867 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-config\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.940889 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dadfe-8db4-46c6-b158-4c91bd49e66c-serving-cert\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.941245 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5jxj\" (UniqueName: \"kubernetes.io/projected/1748f477-b0f3-476c-9f36-798e643df641-kube-api-access-q5jxj\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.941356 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-client-ca\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.941695 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1748f477-b0f3-476c-9f36-798e643df641-serving-cert\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.942506 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-proxy-ca-bundles\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.942644 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhxqc\" (UniqueName: \"kubernetes.io/projected/c66dadfe-8db4-46c6-b158-4c91bd49e66c-kube-api-access-fhxqc\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59"
Jan 22 16:33:33 crc kubenswrapper[4704]: I0122 16:33:33.942683 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-config\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59"
Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.043867 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dadfe-8db4-46c6-b158-4c91bd49e66c-serving-cert\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.044216 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5jxj\" (UniqueName: \"kubernetes.io/projected/1748f477-b0f3-476c-9f36-798e643df641-kube-api-access-q5jxj\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.044360 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-client-ca\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.044541 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1748f477-b0f3-476c-9f36-798e643df641-serving-cert\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.044700 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-proxy-ca-bundles\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 
22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.044956 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhxqc\" (UniqueName: \"kubernetes.io/projected/c66dadfe-8db4-46c6-b158-4c91bd49e66c-kube-api-access-fhxqc\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.045125 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-config\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.045279 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-client-ca\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.045425 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-config\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.046262 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-client-ca\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: 
\"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.046470 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-config\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.047126 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-client-ca\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.047497 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-config\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.048109 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-proxy-ca-bundles\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.051209 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1748f477-b0f3-476c-9f36-798e643df641-serving-cert\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.051505 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dadfe-8db4-46c6-b158-4c91bd49e66c-serving-cert\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.062248 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhxqc\" (UniqueName: \"kubernetes.io/projected/c66dadfe-8db4-46c6-b158-4c91bd49e66c-kube-api-access-fhxqc\") pod \"controller-manager-77f667dfdd-zpx59\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.064384 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5jxj\" (UniqueName: \"kubernetes.io/projected/1748f477-b0f3-476c-9f36-798e643df641-kube-api-access-q5jxj\") pod \"route-controller-manager-7f9fc89966-r7z6d\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.150181 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.157537 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.216496 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" event={"ID":"a30726df-cfa8-4da0-9aa6-419437441379","Type":"ContainerStarted","Data":"7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007"} Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.218130 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.219051 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.455085 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-zpx59"] Jan 22 16:33:34 crc kubenswrapper[4704]: I0122 16:33:34.607265 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"] Jan 22 16:33:34 crc kubenswrapper[4704]: W0122 16:33:34.613269 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1748f477_b0f3_476c_9f36_798e643df641.slice/crio-8300c255df58c05b876de166378da0ab4fbf9aeb55c8fe557c92b6c2484ba800 WatchSource:0}: Error finding container 8300c255df58c05b876de166378da0ab4fbf9aeb55c8fe557c92b6c2484ba800: Status 404 returned error can't find the container with id 8300c255df58c05b876de166378da0ab4fbf9aeb55c8fe557c92b6c2484ba800 Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.232319 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.234525 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.234586 4704 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="613029232ca94f14253f359f4025d031069db318c937c035989150a8d572e928" exitCode=137 Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.234656 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"613029232ca94f14253f359f4025d031069db318c937c035989150a8d572e928"} Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.234694 4704 scope.go:117] "RemoveContainer" containerID="970c92db06a89d50e1290dbb08841876dc61ae177b7d3a990044d4fe502e09bf" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.236568 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" event={"ID":"c66dadfe-8db4-46c6-b158-4c91bd49e66c","Type":"ContainerStarted","Data":"fd2495473d4853af3b12f03f8b12829e1735122f218c05638d8f960873a90df5"} Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.236610 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" event={"ID":"c66dadfe-8db4-46c6-b158-4c91bd49e66c","Type":"ContainerStarted","Data":"2b61737eb16b7607d018e0b68835ab218759b94815baf8c3d8289f1ecd412f5c"} Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.236821 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.240665 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" event={"ID":"1748f477-b0f3-476c-9f36-798e643df641","Type":"ContainerStarted","Data":"3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311"} Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.241033 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" event={"ID":"1748f477-b0f3-476c-9f36-798e643df641","Type":"ContainerStarted","Data":"8300c255df58c05b876de166378da0ab4fbf9aeb55c8fe557c92b6c2484ba800"} Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.241053 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.242073 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.245695 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.258585 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" podStartSLOduration=3.258569525 podStartE2EDuration="3.258569525s" podCreationTimestamp="2026-01-22 16:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:33:35.256597785 +0000 UTC m=+307.901144515" watchObservedRunningTime="2026-01-22 16:33:35.258569525 +0000 UTC 
m=+307.903116225" Jan 22 16:33:35 crc kubenswrapper[4704]: I0122 16:33:35.291867 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" podStartSLOduration=3.291853059 podStartE2EDuration="3.291853059s" podCreationTimestamp="2026-01-22 16:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:33:35.290663483 +0000 UTC m=+307.935210193" watchObservedRunningTime="2026-01-22 16:33:35.291853059 +0000 UTC m=+307.936399759" Jan 22 16:33:36 crc kubenswrapper[4704]: I0122 16:33:36.247151 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 22 16:33:36 crc kubenswrapper[4704]: I0122 16:33:36.249043 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aec613ce74dd831cf258edbff7b076aed10dbd8a8d35b197cea16988a6cfb625"} Jan 22 16:33:36 crc kubenswrapper[4704]: I0122 16:33:36.926176 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:33:36 crc kubenswrapper[4704]: I0122 16:33:36.927495 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 16:33:39 crc kubenswrapper[4704]: I0122 16:33:39.620293 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 16:33:45 crc kubenswrapper[4704]: I0122 16:33:45.017065 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:33:45 crc kubenswrapper[4704]: I0122 16:33:45.021934 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:33:45 crc kubenswrapper[4704]: I0122 16:33:45.307640 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 16:33:49 crc kubenswrapper[4704]: I0122 16:33:49.086364 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:33:49 crc kubenswrapper[4704]: I0122 16:33:49.087178 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:33:55 crc kubenswrapper[4704]: I0122 16:33:55.769352 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-zpx59"] Jan 22 16:33:55 crc kubenswrapper[4704]: I0122 16:33:55.770055 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" podUID="c66dadfe-8db4-46c6-b158-4c91bd49e66c" containerName="controller-manager" containerID="cri-o://fd2495473d4853af3b12f03f8b12829e1735122f218c05638d8f960873a90df5" gracePeriod=30 Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.373946 4704 generic.go:334] "Generic (PLEG): container finished" podID="c66dadfe-8db4-46c6-b158-4c91bd49e66c" 
containerID="fd2495473d4853af3b12f03f8b12829e1735122f218c05638d8f960873a90df5" exitCode=0 Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.374044 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" event={"ID":"c66dadfe-8db4-46c6-b158-4c91bd49e66c","Type":"ContainerDied","Data":"fd2495473d4853af3b12f03f8b12829e1735122f218c05638d8f960873a90df5"} Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.374365 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" event={"ID":"c66dadfe-8db4-46c6-b158-4c91bd49e66c","Type":"ContainerDied","Data":"2b61737eb16b7607d018e0b68835ab218759b94815baf8c3d8289f1ecd412f5c"} Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.374379 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b61737eb16b7607d018e0b68835ab218759b94815baf8c3d8289f1ecd412f5c" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.376692 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.575180 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhxqc\" (UniqueName: \"kubernetes.io/projected/c66dadfe-8db4-46c6-b158-4c91bd49e66c-kube-api-access-fhxqc\") pod \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.575269 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dadfe-8db4-46c6-b158-4c91bd49e66c-serving-cert\") pod \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.575315 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-proxy-ca-bundles\") pod \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.575409 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-client-ca\") pod \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.575441 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-config\") pod \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\" (UID: \"c66dadfe-8db4-46c6-b158-4c91bd49e66c\") " Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.576242 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c66dadfe-8db4-46c6-b158-4c91bd49e66c" (UID: "c66dadfe-8db4-46c6-b158-4c91bd49e66c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.576363 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-client-ca" (OuterVolumeSpecName: "client-ca") pod "c66dadfe-8db4-46c6-b158-4c91bd49e66c" (UID: "c66dadfe-8db4-46c6-b158-4c91bd49e66c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.576422 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-config" (OuterVolumeSpecName: "config") pod "c66dadfe-8db4-46c6-b158-4c91bd49e66c" (UID: "c66dadfe-8db4-46c6-b158-4c91bd49e66c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.583460 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66dadfe-8db4-46c6-b158-4c91bd49e66c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c66dadfe-8db4-46c6-b158-4c91bd49e66c" (UID: "c66dadfe-8db4-46c6-b158-4c91bd49e66c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.583770 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66dadfe-8db4-46c6-b158-4c91bd49e66c-kube-api-access-fhxqc" (OuterVolumeSpecName: "kube-api-access-fhxqc") pod "c66dadfe-8db4-46c6-b158-4c91bd49e66c" (UID: "c66dadfe-8db4-46c6-b158-4c91bd49e66c"). InnerVolumeSpecName "kube-api-access-fhxqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.676602 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhxqc\" (UniqueName: \"kubernetes.io/projected/c66dadfe-8db4-46c6-b158-4c91bd49e66c-kube-api-access-fhxqc\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.676637 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dadfe-8db4-46c6-b158-4c91bd49e66c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.676680 4704 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.676692 4704 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.676703 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dadfe-8db4-46c6-b158-4c91bd49e66c-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.875312 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-df48ff9c9-zn5qc"] Jan 22 16:33:56 crc kubenswrapper[4704]: E0122 16:33:56.875774 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66dadfe-8db4-46c6-b158-4c91bd49e66c" containerName="controller-manager" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.875834 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66dadfe-8db4-46c6-b158-4c91bd49e66c" containerName="controller-manager" Jan 22 16:33:56 crc 
kubenswrapper[4704]: I0122 16:33:56.876056 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66dadfe-8db4-46c6-b158-4c91bd49e66c" containerName="controller-manager" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.876771 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.886203 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-df48ff9c9-zn5qc"] Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.981285 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h8nv\" (UniqueName: \"kubernetes.io/projected/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-kube-api-access-5h8nv\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.981704 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-serving-cert\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.981912 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-proxy-ca-bundles\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.982099 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-client-ca\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:56 crc kubenswrapper[4704]: I0122 16:33:56.982321 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-config\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.082734 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-config\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.083007 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h8nv\" (UniqueName: \"kubernetes.io/projected/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-kube-api-access-5h8nv\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.083112 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-serving-cert\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " 
pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.083197 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-proxy-ca-bundles\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.083319 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-client-ca\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.084039 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-config\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.084080 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-client-ca\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.084849 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-proxy-ca-bundles\") pod \"controller-manager-df48ff9c9-zn5qc\" 
(UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.087457 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-serving-cert\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.113052 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h8nv\" (UniqueName: \"kubernetes.io/projected/6dc98c77-ef10-48a0-9d8d-5feaf8262c0e-kube-api-access-5h8nv\") pod \"controller-manager-df48ff9c9-zn5qc\" (UID: \"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e\") " pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.197072 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.380924 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77f667dfdd-zpx59" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.415856 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-zpx59"] Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.419762 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77f667dfdd-zpx59"] Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.642815 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66dadfe-8db4-46c6-b158-4c91bd49e66c" path="/var/lib/kubelet/pods/c66dadfe-8db4-46c6-b158-4c91bd49e66c/volumes" Jan 22 16:33:57 crc kubenswrapper[4704]: I0122 16:33:57.660346 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-df48ff9c9-zn5qc"] Jan 22 16:33:57 crc kubenswrapper[4704]: W0122 16:33:57.669835 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dc98c77_ef10_48a0_9d8d_5feaf8262c0e.slice/crio-e2872bc868f3ef9d02c79d97aba0e5c3e2177eb5a88bd11a5b0683b724bb31e4 WatchSource:0}: Error finding container e2872bc868f3ef9d02c79d97aba0e5c3e2177eb5a88bd11a5b0683b724bb31e4: Status 404 returned error can't find the container with id e2872bc868f3ef9d02c79d97aba0e5c3e2177eb5a88bd11a5b0683b724bb31e4 Jan 22 16:33:58 crc kubenswrapper[4704]: I0122 16:33:58.385608 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" event={"ID":"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e","Type":"ContainerStarted","Data":"57416ddacbcf2cd6b844284e8440e1d74d1a0e6515c77b066b764a8a23570d1b"} Jan 22 16:33:58 crc kubenswrapper[4704]: I0122 16:33:58.385977 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 
22 16:33:58 crc kubenswrapper[4704]: I0122 16:33:58.385993 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" event={"ID":"6dc98c77-ef10-48a0-9d8d-5feaf8262c0e","Type":"ContainerStarted","Data":"e2872bc868f3ef9d02c79d97aba0e5c3e2177eb5a88bd11a5b0683b724bb31e4"} Jan 22 16:33:58 crc kubenswrapper[4704]: I0122 16:33:58.390287 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" Jan 22 16:33:58 crc kubenswrapper[4704]: I0122 16:33:58.401564 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-df48ff9c9-zn5qc" podStartSLOduration=3.401547391 podStartE2EDuration="3.401547391s" podCreationTimestamp="2026-01-22 16:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:33:58.399843382 +0000 UTC m=+331.044390072" watchObservedRunningTime="2026-01-22 16:33:58.401547391 +0000 UTC m=+331.046094101" Jan 22 16:34:19 crc kubenswrapper[4704]: I0122 16:34:19.086937 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:34:19 crc kubenswrapper[4704]: I0122 16:34:19.087541 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.716811 4704 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-vvr6j"] Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.718154 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.739852 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vvr6j"] Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.863833 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-registry-tls\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.863904 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smmdh\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-kube-api-access-smmdh\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.864004 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-registry-certificates\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.864027 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-bound-sa-token\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.864079 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.864107 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-trusted-ca\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.864122 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.864179 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 
16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.897224 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.965290 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.965334 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-trusted-ca\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.965384 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-registry-tls\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.965411 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smmdh\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-kube-api-access-smmdh\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: 
\"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.965441 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-registry-certificates\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.965460 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-bound-sa-token\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.965482 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.966051 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.966595 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-trusted-ca\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.966701 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-registry-certificates\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.971637 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.971737 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-registry-tls\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.980358 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-bound-sa-token\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:27 crc kubenswrapper[4704]: I0122 16:34:27.982887 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-smmdh\" (UniqueName: \"kubernetes.io/projected/39b2e5b3-252b-48af-9333-c7e0a9e9f36a-kube-api-access-smmdh\") pod \"image-registry-66df7c8f76-vvr6j\" (UID: \"39b2e5b3-252b-48af-9333-c7e0a9e9f36a\") " pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:28 crc kubenswrapper[4704]: I0122 16:34:28.036744 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:28 crc kubenswrapper[4704]: I0122 16:34:28.492893 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vvr6j"] Jan 22 16:34:28 crc kubenswrapper[4704]: I0122 16:34:28.568924 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" event={"ID":"39b2e5b3-252b-48af-9333-c7e0a9e9f36a","Type":"ContainerStarted","Data":"097f64b334a3dfeaa16a0aced9253d1d8bd87be9139e92adf3a7ff9e613ed0de"} Jan 22 16:34:29 crc kubenswrapper[4704]: I0122 16:34:29.575633 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" event={"ID":"39b2e5b3-252b-48af-9333-c7e0a9e9f36a","Type":"ContainerStarted","Data":"e171897a232ac1319dc18007418eca083eb3d29ce5797370ce508591d51ae44b"} Jan 22 16:34:29 crc kubenswrapper[4704]: I0122 16:34:29.576084 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:29 crc kubenswrapper[4704]: I0122 16:34:29.604018 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" podStartSLOduration=2.603957215 podStartE2EDuration="2.603957215s" podCreationTimestamp="2026-01-22 16:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 
16:34:29.592184663 +0000 UTC m=+362.236731383" watchObservedRunningTime="2026-01-22 16:34:29.603957215 +0000 UTC m=+362.248503915" Jan 22 16:34:48 crc kubenswrapper[4704]: I0122 16:34:48.044862 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vvr6j" Jan 22 16:34:48 crc kubenswrapper[4704]: I0122 16:34:48.114641 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xvsbg"] Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.086310 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.086609 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.086656 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.088363 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"472b8c837b02223b278946b3b749c037d005e52a819017280faf01387d829462"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.088544 4704 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://472b8c837b02223b278946b3b749c037d005e52a819017280faf01387d829462" gracePeriod=600 Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.706517 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="472b8c837b02223b278946b3b749c037d005e52a819017280faf01387d829462" exitCode=0 Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.706887 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"472b8c837b02223b278946b3b749c037d005e52a819017280faf01387d829462"} Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.706921 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"6ac46dd18c98dc20006d974213963bea845ef28d8c751b219281baa2762ee2d0"} Jan 22 16:34:49 crc kubenswrapper[4704]: I0122 16:34:49.706942 4704 scope.go:117] "RemoveContainer" containerID="a3474a98f0fc2bc16c44bd914b7024240296479fe187e66dee44eafe631a95c3" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.017607 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"] Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.018384 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" podUID="1748f477-b0f3-476c-9f36-798e643df641" containerName="route-controller-manager" containerID="cri-o://3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311" 
gracePeriod=30 Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.395546 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.524770 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-client-ca\") pod \"1748f477-b0f3-476c-9f36-798e643df641\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.524893 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-config\") pod \"1748f477-b0f3-476c-9f36-798e643df641\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.524932 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5jxj\" (UniqueName: \"kubernetes.io/projected/1748f477-b0f3-476c-9f36-798e643df641-kube-api-access-q5jxj\") pod \"1748f477-b0f3-476c-9f36-798e643df641\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.525028 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1748f477-b0f3-476c-9f36-798e643df641-serving-cert\") pod \"1748f477-b0f3-476c-9f36-798e643df641\" (UID: \"1748f477-b0f3-476c-9f36-798e643df641\") " Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.525864 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-client-ca" (OuterVolumeSpecName: "client-ca") pod "1748f477-b0f3-476c-9f36-798e643df641" (UID: "1748f477-b0f3-476c-9f36-798e643df641"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.526165 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-config" (OuterVolumeSpecName: "config") pod "1748f477-b0f3-476c-9f36-798e643df641" (UID: "1748f477-b0f3-476c-9f36-798e643df641"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.530784 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1748f477-b0f3-476c-9f36-798e643df641-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1748f477-b0f3-476c-9f36-798e643df641" (UID: "1748f477-b0f3-476c-9f36-798e643df641"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.540976 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1748f477-b0f3-476c-9f36-798e643df641-kube-api-access-q5jxj" (OuterVolumeSpecName: "kube-api-access-q5jxj") pod "1748f477-b0f3-476c-9f36-798e643df641" (UID: "1748f477-b0f3-476c-9f36-798e643df641"). InnerVolumeSpecName "kube-api-access-q5jxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.626198 4704 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.626253 4704 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1748f477-b0f3-476c-9f36-798e643df641-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.626270 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5jxj\" (UniqueName: \"kubernetes.io/projected/1748f477-b0f3-476c-9f36-798e643df641-kube-api-access-q5jxj\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.626283 4704 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1748f477-b0f3-476c-9f36-798e643df641-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.733869 4704 generic.go:334] "Generic (PLEG): container finished" podID="1748f477-b0f3-476c-9f36-798e643df641" containerID="3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311" exitCode=0 Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.733924 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" event={"ID":"1748f477-b0f3-476c-9f36-798e643df641","Type":"ContainerDied","Data":"3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311"} Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.733949 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.733967 4704 scope.go:117] "RemoveContainer" containerID="3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.733955 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d" event={"ID":"1748f477-b0f3-476c-9f36-798e643df641","Type":"ContainerDied","Data":"8300c255df58c05b876de166378da0ab4fbf9aeb55c8fe557c92b6c2484ba800"} Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.757257 4704 scope.go:117] "RemoveContainer" containerID="3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311" Jan 22 16:34:52 crc kubenswrapper[4704]: E0122 16:34:52.757892 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311\": container with ID starting with 3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311 not found: ID does not exist" containerID="3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.757922 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311"} err="failed to get container status \"3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311\": rpc error: code = NotFound desc = could not find container \"3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311\": container with ID starting with 3eabe5ef1b7d03e06199f4bc45b8865a6e77ed7c4dd492078b0b15ba10e75311 not found: ID does not exist" Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.772893 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"] Jan 22 16:34:52 crc kubenswrapper[4704]: I0122 16:34:52.775611 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9fc89966-r7z6d"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.150933 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qlsl"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.151592 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8qlsl" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="registry-server" containerID="cri-o://9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58" gracePeriod=30 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.161247 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4kgkm"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.161649 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4kgkm" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="registry-server" containerID="cri-o://227df9cadaca59a33153bb852b88588d4c533eb00e3755842b3dc9f32ac3658d" gracePeriod=30 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.165027 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lx7sw"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.165234 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" containerID="cri-o://7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007" gracePeriod=30 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 
16:34:53.172271 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrdrd"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.172567 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrdrd" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="registry-server" containerID="cri-o://91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b" gracePeriod=30 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.183116 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57zfj"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.183410 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-57zfj" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="registry-server" containerID="cri-o://c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e" gracePeriod=30 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.189074 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vs2mz"] Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.189297 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1748f477-b0f3-476c-9f36-798e643df641" containerName="route-controller-manager" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.189308 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="1748f477-b0f3-476c-9f36-798e643df641" containerName="route-controller-manager" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.189418 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="1748f477-b0f3-476c-9f36-798e643df641" containerName="route-controller-manager" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.189847 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.197357 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vs2mz"] Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.298399 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58 is running failed: container process not found" containerID="9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.299235 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58 is running failed: container process not found" containerID="9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.299640 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58 is running failed: container process not found" containerID="9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.299688 4704 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/certified-operators-8qlsl" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.340551 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t59f7\" (UniqueName: \"kubernetes.io/projected/40969928-6095-4242-80c7-a8daed2e28b1-kube-api-access-t59f7\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.340948 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40969928-6095-4242-80c7-a8daed2e28b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.340990 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/40969928-6095-4242-80c7-a8daed2e28b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.441602 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40969928-6095-4242-80c7-a8daed2e28b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.441644 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/40969928-6095-4242-80c7-a8daed2e28b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.441695 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t59f7\" (UniqueName: \"kubernetes.io/projected/40969928-6095-4242-80c7-a8daed2e28b1-kube-api-access-t59f7\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.442903 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40969928-6095-4242-80c7-a8daed2e28b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.454905 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/40969928-6095-4242-80c7-a8daed2e28b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: \"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.457229 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t59f7\" (UniqueName: \"kubernetes.io/projected/40969928-6095-4242-80c7-a8daed2e28b1-kube-api-access-t59f7\") pod \"marketplace-operator-79b997595-vs2mz\" (UID: 
\"40969928-6095-4242-80c7-a8daed2e28b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.512174 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.643371 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1748f477-b0f3-476c-9f36-798e643df641" path="/var/lib/kubelet/pods/1748f477-b0f3-476c-9f36-798e643df641/volumes" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.676367 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.738826 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.744758 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.745004 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-trusted-ca\") pod \"a30726df-cfa8-4da0-9aa6-419437441379\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.745053 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd4nh\" (UniqueName: \"kubernetes.io/projected/a30726df-cfa8-4da0-9aa6-419437441379-kube-api-access-fd4nh\") pod \"a30726df-cfa8-4da0-9aa6-419437441379\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.745139 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-operator-metrics\") pod \"a30726df-cfa8-4da0-9aa6-419437441379\" (UID: \"a30726df-cfa8-4da0-9aa6-419437441379\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.746634 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a30726df-cfa8-4da0-9aa6-419437441379" (UID: "a30726df-cfa8-4da0-9aa6-419437441379"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.751258 4704 generic.go:334] "Generic (PLEG): container finished" podID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerID="91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b" exitCode=0 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.751551 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrdrd" event={"ID":"137b8d6b-e852-4f81-992d-b5cc4b5ed519","Type":"ContainerDied","Data":"91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.751586 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrdrd" event={"ID":"137b8d6b-e852-4f81-992d-b5cc4b5ed519","Type":"ContainerDied","Data":"67599c03fb2fc56807ef58390af90aa0b59ed4e87c97331f84b4678435e85ee5"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.751605 4704 scope.go:117] "RemoveContainer" containerID="91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.751712 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrdrd" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.751988 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a30726df-cfa8-4da0-9aa6-419437441379" (UID: "a30726df-cfa8-4da0-9aa6-419437441379"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.755572 4704 generic.go:334] "Generic (PLEG): container finished" podID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerID="c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e" exitCode=0 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.755637 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57zfj" event={"ID":"d39c37f0-3471-4222-b3f0-b9947d334ef5","Type":"ContainerDied","Data":"c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.755668 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57zfj" event={"ID":"d39c37f0-3471-4222-b3f0-b9947d334ef5","Type":"ContainerDied","Data":"6235916ba9149102a0631dae5384bae0d09d5a9dbbe8bce24953b202969f0889"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.755747 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-57zfj" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.757935 4704 generic.go:334] "Generic (PLEG): container finished" podID="798305b7-a0da-49f9-904a-265e215f1fea" containerID="9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58" exitCode=0 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.757986 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlsl" event={"ID":"798305b7-a0da-49f9-904a-265e215f1fea","Type":"ContainerDied","Data":"9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.768219 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30726df-cfa8-4da0-9aa6-419437441379-kube-api-access-fd4nh" (OuterVolumeSpecName: "kube-api-access-fd4nh") pod "a30726df-cfa8-4da0-9aa6-419437441379" (UID: "a30726df-cfa8-4da0-9aa6-419437441379"). InnerVolumeSpecName "kube-api-access-fd4nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.768690 4704 generic.go:334] "Generic (PLEG): container finished" podID="16980b70-91da-419b-b855-6a2551f62423" containerID="227df9cadaca59a33153bb852b88588d4c533eb00e3755842b3dc9f32ac3658d" exitCode=0 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.768854 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kgkm" event={"ID":"16980b70-91da-419b-b855-6a2551f62423","Type":"ContainerDied","Data":"227df9cadaca59a33153bb852b88588d4c533eb00e3755842b3dc9f32ac3658d"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.776444 4704 generic.go:334] "Generic (PLEG): container finished" podID="a30726df-cfa8-4da0-9aa6-419437441379" containerID="7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007" exitCode=0 Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.776566 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.776874 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" event={"ID":"a30726df-cfa8-4da0-9aa6-419437441379","Type":"ContainerDied","Data":"7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.776916 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lx7sw" event={"ID":"a30726df-cfa8-4da0-9aa6-419437441379","Type":"ContainerDied","Data":"74f4fc96cd3fb5ed9b46d3d7f546c8c660b51720ab45be891155e838ec3120a0"} Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.777042 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.789955 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.799146 4704 scope.go:117] "RemoveContainer" containerID="e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.813837 4704 scope.go:117] "RemoveContainer" containerID="12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847148 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-utilities\") pod \"16980b70-91da-419b-b855-6a2551f62423\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847202 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xlk4\" (UniqueName: \"kubernetes.io/projected/d39c37f0-3471-4222-b3f0-b9947d334ef5-kube-api-access-2xlk4\") pod \"d39c37f0-3471-4222-b3f0-b9947d334ef5\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847239 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsfpr\" (UniqueName: \"kubernetes.io/projected/137b8d6b-e852-4f81-992d-b5cc4b5ed519-kube-api-access-jsfpr\") pod \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847269 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-catalog-content\") pod 
\"137b8d6b-e852-4f81-992d-b5cc4b5ed519\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847316 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-utilities\") pod \"d39c37f0-3471-4222-b3f0-b9947d334ef5\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847360 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-catalog-content\") pod \"798305b7-a0da-49f9-904a-265e215f1fea\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847402 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-utilities\") pod \"798305b7-a0da-49f9-904a-265e215f1fea\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847463 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt6wd\" (UniqueName: \"kubernetes.io/projected/16980b70-91da-419b-b855-6a2551f62423-kube-api-access-lt6wd\") pod \"16980b70-91da-419b-b855-6a2551f62423\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847493 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-catalog-content\") pod \"d39c37f0-3471-4222-b3f0-b9947d334ef5\" (UID: \"d39c37f0-3471-4222-b3f0-b9947d334ef5\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847526 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-utilities\") pod \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\" (UID: \"137b8d6b-e852-4f81-992d-b5cc4b5ed519\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847547 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-catalog-content\") pod \"16980b70-91da-419b-b855-6a2551f62423\" (UID: \"16980b70-91da-419b-b855-6a2551f62423\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847583 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk6gd\" (UniqueName: \"kubernetes.io/projected/798305b7-a0da-49f9-904a-265e215f1fea-kube-api-access-wk6gd\") pod \"798305b7-a0da-49f9-904a-265e215f1fea\" (UID: \"798305b7-a0da-49f9-904a-265e215f1fea\") " Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847928 4704 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847945 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd4nh\" (UniqueName: \"kubernetes.io/projected/a30726df-cfa8-4da0-9aa6-419437441379-kube-api-access-fd4nh\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.847957 4704 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a30726df-cfa8-4da0-9aa6-419437441379-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.851448 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-utilities" (OuterVolumeSpecName: "utilities") pod "137b8d6b-e852-4f81-992d-b5cc4b5ed519" (UID: "137b8d6b-e852-4f81-992d-b5cc4b5ed519"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.851624 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-utilities" (OuterVolumeSpecName: "utilities") pod "798305b7-a0da-49f9-904a-265e215f1fea" (UID: "798305b7-a0da-49f9-904a-265e215f1fea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.851937 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-utilities" (OuterVolumeSpecName: "utilities") pod "d39c37f0-3471-4222-b3f0-b9947d334ef5" (UID: "d39c37f0-3471-4222-b3f0-b9947d334ef5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.852724 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-utilities" (OuterVolumeSpecName: "utilities") pod "16980b70-91da-419b-b855-6a2551f62423" (UID: "16980b70-91da-419b-b855-6a2551f62423"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.852968 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137b8d6b-e852-4f81-992d-b5cc4b5ed519-kube-api-access-jsfpr" (OuterVolumeSpecName: "kube-api-access-jsfpr") pod "137b8d6b-e852-4f81-992d-b5cc4b5ed519" (UID: "137b8d6b-e852-4f81-992d-b5cc4b5ed519"). InnerVolumeSpecName "kube-api-access-jsfpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.854495 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39c37f0-3471-4222-b3f0-b9947d334ef5-kube-api-access-2xlk4" (OuterVolumeSpecName: "kube-api-access-2xlk4") pod "d39c37f0-3471-4222-b3f0-b9947d334ef5" (UID: "d39c37f0-3471-4222-b3f0-b9947d334ef5"). InnerVolumeSpecName "kube-api-access-2xlk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.855110 4704 scope.go:117] "RemoveContainer" containerID="91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.855944 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b\": container with ID starting with 91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b not found: ID does not exist" containerID="91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.855981 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b"} err="failed to get container status \"91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b\": rpc error: code = NotFound desc = could not find container \"91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b\": container with ID starting with 91cd90f15f299910207e189c681b367917f184bccb602fcaa876dbb5cb64177b not found: ID does not exist" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.856012 4704 scope.go:117] "RemoveContainer" containerID="e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.856519 
4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb\": container with ID starting with e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb not found: ID does not exist" containerID="e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.856566 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb"} err="failed to get container status \"e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb\": rpc error: code = NotFound desc = could not find container \"e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb\": container with ID starting with e0802e368334e98ce85e2125cd81cc960ff9f5c88a033bae59dbdeb1c7863eeb not found: ID does not exist" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.856587 4704 scope.go:117] "RemoveContainer" containerID="12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.856951 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0\": container with ID starting with 12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0 not found: ID does not exist" containerID="12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.856974 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0"} err="failed to get container status \"12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0\": rpc error: code = 
NotFound desc = could not find container \"12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0\": container with ID starting with 12e867d743d9a461c208763cacbf817099bc030a8f8c3ac76bc20705c4d9f1a0 not found: ID does not exist" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.856992 4704 scope.go:117] "RemoveContainer" containerID="c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.857157 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16980b70-91da-419b-b855-6a2551f62423-kube-api-access-lt6wd" (OuterVolumeSpecName: "kube-api-access-lt6wd") pod "16980b70-91da-419b-b855-6a2551f62423" (UID: "16980b70-91da-419b-b855-6a2551f62423"). InnerVolumeSpecName "kube-api-access-lt6wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.863677 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lx7sw"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.867750 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lx7sw"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.870398 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798305b7-a0da-49f9-904a-265e215f1fea-kube-api-access-wk6gd" (OuterVolumeSpecName: "kube-api-access-wk6gd") pod "798305b7-a0da-49f9-904a-265e215f1fea" (UID: "798305b7-a0da-49f9-904a-265e215f1fea"). InnerVolumeSpecName "kube-api-access-wk6gd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.873507 4704 scope.go:117] "RemoveContainer" containerID="8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.876970 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "137b8d6b-e852-4f81-992d-b5cc4b5ed519" (UID: "137b8d6b-e852-4f81-992d-b5cc4b5ed519"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.890551 4704 scope.go:117] "RemoveContainer" containerID="7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.907569 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16980b70-91da-419b-b855-6a2551f62423" (UID: "16980b70-91da-419b-b855-6a2551f62423"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.912396 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "798305b7-a0da-49f9-904a-265e215f1fea" (UID: "798305b7-a0da-49f9-904a-265e215f1fea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.914848 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s"] Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915060 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915075 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915089 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915096 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915104 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915110 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915123 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915129 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915137 4704 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915144 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915152 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915160 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915167 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915173 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915182 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915187 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915196 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915202 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915213 4704 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915219 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="extract-utilities" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915226 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915232 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915242 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915247 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915255 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915260 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="extract-content" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.915268 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915274 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915368 4704 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="798305b7-a0da-49f9-904a-265e215f1fea" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915377 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915386 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915394 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="16980b70-91da-419b-b855-6a2551f62423" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915401 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" containerName="registry-server" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.915970 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.917542 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.918155 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.918487 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.918499 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.919657 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.919897 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.923514 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s"] Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.927741 4704 scope.go:117] "RemoveContainer" containerID="c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.928320 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e\": container with ID starting with c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e not found: ID 
does not exist" containerID="c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.928353 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e"} err="failed to get container status \"c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e\": rpc error: code = NotFound desc = could not find container \"c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e\": container with ID starting with c2cf8921c4991a00237d5b2de8467c074a4eb01b633ea2c9878b93099d962e7e not found: ID does not exist" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.928380 4704 scope.go:117] "RemoveContainer" containerID="8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.932338 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3\": container with ID starting with 8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3 not found: ID does not exist" containerID="8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.932386 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3"} err="failed to get container status \"8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3\": rpc error: code = NotFound desc = could not find container \"8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3\": container with ID starting with 8f1ad3f4a3a6145dc795cba3fa5c67adab6e99222a46ebc244414d653bf209b3 not found: ID does not exist" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.932421 4704 
scope.go:117] "RemoveContainer" containerID="7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.933649 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3\": container with ID starting with 7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3 not found: ID does not exist" containerID="7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.933699 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3"} err="failed to get container status \"7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3\": rpc error: code = NotFound desc = could not find container \"7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3\": container with ID starting with 7187da5c23c6b18a80148f3a33682af43e6bf13e484b81d79fde169821a6e2a3 not found: ID does not exist" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.933715 4704 scope.go:117] "RemoveContainer" containerID="7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949106 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949147 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt6wd\" (UniqueName: \"kubernetes.io/projected/16980b70-91da-419b-b855-6a2551f62423-kube-api-access-lt6wd\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949162 4704 reconciler_common.go:293] "Volume detached 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949173 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949185 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk6gd\" (UniqueName: \"kubernetes.io/projected/798305b7-a0da-49f9-904a-265e215f1fea-kube-api-access-wk6gd\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949197 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16980b70-91da-419b-b855-6a2551f62423-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949208 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xlk4\" (UniqueName: \"kubernetes.io/projected/d39c37f0-3471-4222-b3f0-b9947d334ef5-kube-api-access-2xlk4\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949221 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsfpr\" (UniqueName: \"kubernetes.io/projected/137b8d6b-e852-4f81-992d-b5cc4b5ed519-kube-api-access-jsfpr\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949231 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137b8d6b-e852-4f81-992d-b5cc4b5ed519-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949241 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.949252 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798305b7-a0da-49f9-904a-265e215f1fea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.951382 4704 scope.go:117] "RemoveContainer" containerID="0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.963522 4704 scope.go:117] "RemoveContainer" containerID="7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007" Jan 22 16:34:53 crc kubenswrapper[4704]: E0122 16:34:53.964059 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007\": container with ID starting with 7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007 not found: ID does not exist" containerID="7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.964112 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007"} err="failed to get container status \"7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007\": rpc error: code = NotFound desc = could not find container \"7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007\": container with ID starting with 7ead3e9a3d635f7d740106e01758b269e6883b89753cb2516681b58e88c95007 not found: ID does not exist" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.964147 4704 scope.go:117] "RemoveContainer" containerID="0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8" Jan 22 16:34:53 crc 
kubenswrapper[4704]: E0122 16:34:53.964507 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8\": container with ID starting with 0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8 not found: ID does not exist" containerID="0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8" Jan 22 16:34:53 crc kubenswrapper[4704]: I0122 16:34:53.964548 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8"} err="failed to get container status \"0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8\": rpc error: code = NotFound desc = could not find container \"0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8\": container with ID starting with 0d1afc5502f4def63966520418c15215b21d533a2cdbcbe43d29d17f6f8732f8 not found: ID does not exist" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.022218 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d39c37f0-3471-4222-b3f0-b9947d334ef5" (UID: "d39c37f0-3471-4222-b3f0-b9947d334ef5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.035549 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vs2mz"] Jan 22 16:34:54 crc kubenswrapper[4704]: W0122 16:34:54.038380 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40969928_6095_4242_80c7_a8daed2e28b1.slice/crio-6928d158d22f1a65992a6b07f2feb7a950d68eed2a667915a35af9472cc4a07c WatchSource:0}: Error finding container 6928d158d22f1a65992a6b07f2feb7a950d68eed2a667915a35af9472cc4a07c: Status 404 returned error can't find the container with id 6928d158d22f1a65992a6b07f2feb7a950d68eed2a667915a35af9472cc4a07c Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.049987 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-client-ca\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.050833 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4jh\" (UniqueName: \"kubernetes.io/projected/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-kube-api-access-cr4jh\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.050923 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-serving-cert\") pod 
\"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.050961 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-config\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.051060 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39c37f0-3471-4222-b3f0-b9947d334ef5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.086961 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrdrd"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.109455 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrdrd"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.113267 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57zfj"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.116950 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-57zfj"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.152069 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4jh\" (UniqueName: \"kubernetes.io/projected/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-kube-api-access-cr4jh\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " 
pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.152211 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-serving-cert\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.152276 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-config\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.152321 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-client-ca\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.153713 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-client-ca\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.155906 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-config\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.160433 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-serving-cert\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.172163 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4jh\" (UniqueName: \"kubernetes.io/projected/4cd51da6-e8ef-432c-9e1e-5b1e378632d4-kube-api-access-cr4jh\") pod \"route-controller-manager-69f8f665-nwk7s\" (UID: \"4cd51da6-e8ef-432c-9e1e-5b1e378632d4\") " pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.234334 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.654866 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.768049 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7ct"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.768349 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30726df-cfa8-4da0-9aa6-419437441379" containerName="marketplace-operator" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.769406 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.771110 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.781826 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7ct"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.788678 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" event={"ID":"40969928-6095-4242-80c7-a8daed2e28b1","Type":"ContainerStarted","Data":"5ce146d37bfebd7f09c082eeb83f27aa8b4efa2c90775613ca3dc266fb9df049"} Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.788734 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" event={"ID":"40969928-6095-4242-80c7-a8daed2e28b1","Type":"ContainerStarted","Data":"6928d158d22f1a65992a6b07f2feb7a950d68eed2a667915a35af9472cc4a07c"} Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.789378 4704 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.794426 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qlsl" event={"ID":"798305b7-a0da-49f9-904a-265e215f1fea","Type":"ContainerDied","Data":"0f4826ebcbfb58a5bc84bb2987df1347967d07a03345bf435a95fa3374c9408f"} Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.794478 4704 scope.go:117] "RemoveContainer" containerID="9e8ce6c6209a69a47a0d12c15f1d30beb1320a55ab08a69756176f5b74464f58" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.794601 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qlsl" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.798179 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.800367 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4kgkm" event={"ID":"16980b70-91da-419b-b855-6a2551f62423","Type":"ContainerDied","Data":"5239e85c73fb66224a9000fc0f8c7e1537fd2d565fd1878c0112b89139367b80"} Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.800832 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4kgkm" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.810212 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" event={"ID":"4cd51da6-e8ef-432c-9e1e-5b1e378632d4","Type":"ContainerStarted","Data":"4fa57497ec55da2dff869939e287f04b692d26d74ad049e71cffc15e1b30c6b3"} Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.814166 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vs2mz" podStartSLOduration=1.814151676 podStartE2EDuration="1.814151676s" podCreationTimestamp="2026-01-22 16:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:34:54.811963223 +0000 UTC m=+387.456509943" watchObservedRunningTime="2026-01-22 16:34:54.814151676 +0000 UTC m=+387.458698376" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.823617 4704 scope.go:117] "RemoveContainer" containerID="7b7903df8c0314805afdd70a19a9a3175d02f9d70afdc58d4dc886c594447b63" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.847776 4704 scope.go:117] "RemoveContainer" containerID="89756bb79d0c08e07305dab603a0c4f5129c0878286271e2bbd77bae9f5ad541" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.860770 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlxd\" (UniqueName: \"kubernetes.io/projected/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-kube-api-access-fxlxd\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.860838 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-catalog-content\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.860905 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-utilities\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.868603 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qlsl"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.875211 4704 scope.go:117] "RemoveContainer" containerID="227df9cadaca59a33153bb852b88588d4c533eb00e3755842b3dc9f32ac3658d" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.879460 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8qlsl"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.884079 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4kgkm"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.887334 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4kgkm"] Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.891984 4704 scope.go:117] "RemoveContainer" containerID="c340af381901978caf447bf2db61ecda2dd7ef72676196cd2f53a6a56e51306f" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.912303 4704 scope.go:117] "RemoveContainer" containerID="bfb1d03b6f4171a4efa04ae01fe1a3253c631249b4d6b91fe8f4d3a612e5405a" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.962551 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-utilities\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.962979 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-utilities\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.963243 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxlxd\" (UniqueName: \"kubernetes.io/projected/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-kube-api-access-fxlxd\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.963285 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-catalog-content\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.963602 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-catalog-content\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:54 crc kubenswrapper[4704]: I0122 16:34:54.983118 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxlxd\" (UniqueName: 
\"kubernetes.io/projected/4841bd3f-e66d-4d5b-8eef-7d7584d19c79-kube-api-access-fxlxd\") pod \"redhat-marketplace-rc7ct\" (UID: \"4841bd3f-e66d-4d5b-8eef-7d7584d19c79\") " pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.085832 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.475352 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rc7ct"] Jan 22 16:34:55 crc kubenswrapper[4704]: W0122 16:34:55.489968 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4841bd3f_e66d_4d5b_8eef_7d7584d19c79.slice/crio-e517c0767a0a713e101fb854afb8eace000848161f678d87e9620bc8fb4092ea WatchSource:0}: Error finding container e517c0767a0a713e101fb854afb8eace000848161f678d87e9620bc8fb4092ea: Status 404 returned error can't find the container with id e517c0767a0a713e101fb854afb8eace000848161f678d87e9620bc8fb4092ea Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.639846 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137b8d6b-e852-4f81-992d-b5cc4b5ed519" path="/var/lib/kubelet/pods/137b8d6b-e852-4f81-992d-b5cc4b5ed519/volumes" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.640656 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16980b70-91da-419b-b855-6a2551f62423" path="/var/lib/kubelet/pods/16980b70-91da-419b-b855-6a2551f62423/volumes" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.641255 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798305b7-a0da-49f9-904a-265e215f1fea" path="/var/lib/kubelet/pods/798305b7-a0da-49f9-904a-265e215f1fea/volumes" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.641876 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a30726df-cfa8-4da0-9aa6-419437441379" path="/var/lib/kubelet/pods/a30726df-cfa8-4da0-9aa6-419437441379/volumes" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.642331 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39c37f0-3471-4222-b3f0-b9947d334ef5" path="/var/lib/kubelet/pods/d39c37f0-3471-4222-b3f0-b9947d334ef5/volumes" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.762498 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dnbwc"] Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.763554 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.768920 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.772255 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnbwc"] Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.820748 4704 generic.go:334] "Generic (PLEG): container finished" podID="4841bd3f-e66d-4d5b-8eef-7d7584d19c79" containerID="63eeec6971d643cd3f209ec08ef93b9432b727f1e5152bdc400fbc33301949fc" exitCode=0 Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.820830 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7ct" event={"ID":"4841bd3f-e66d-4d5b-8eef-7d7584d19c79","Type":"ContainerDied","Data":"63eeec6971d643cd3f209ec08ef93b9432b727f1e5152bdc400fbc33301949fc"} Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.820897 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7ct" event={"ID":"4841bd3f-e66d-4d5b-8eef-7d7584d19c79","Type":"ContainerStarted","Data":"e517c0767a0a713e101fb854afb8eace000848161f678d87e9620bc8fb4092ea"} 
Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.824873 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" event={"ID":"4cd51da6-e8ef-432c-9e1e-5b1e378632d4","Type":"ContainerStarted","Data":"c2be076e22723c67af2b7af3a3c7c4eaded215caa038230955cafcca342a47a3"} Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.825219 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.829882 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.874558 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-catalog-content\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.874622 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-utilities\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.874643 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z5ln\" (UniqueName: \"kubernetes.io/projected/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-kube-api-access-7z5ln\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " 
pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.976567 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-catalog-content\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.976997 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-utilities\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.977035 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z5ln\" (UniqueName: \"kubernetes.io/projected/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-kube-api-access-7z5ln\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.978328 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-catalog-content\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc kubenswrapper[4704]: I0122 16:34:55.978741 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-utilities\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:55 crc 
kubenswrapper[4704]: I0122 16:34:55.996835 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z5ln\" (UniqueName: \"kubernetes.io/projected/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-kube-api-access-7z5ln\") pod \"redhat-operators-dnbwc\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:56 crc kubenswrapper[4704]: I0122 16:34:56.093202 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:34:56 crc kubenswrapper[4704]: I0122 16:34:56.471257 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69f8f665-nwk7s" podStartSLOduration=4.471235392 podStartE2EDuration="4.471235392s" podCreationTimestamp="2026-01-22 16:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:34:55.870024119 +0000 UTC m=+388.514570819" watchObservedRunningTime="2026-01-22 16:34:56.471235392 +0000 UTC m=+389.115782092" Jan 22 16:34:56 crc kubenswrapper[4704]: I0122 16:34:56.474055 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnbwc"] Jan 22 16:34:56 crc kubenswrapper[4704]: W0122 16:34:56.481375 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a29fc77_1872_44d7_b2a2_9c0f3a13f1da.slice/crio-edcc92696e7ee121f3c2056fdb5fc081d792e5036130bfe912089d6d513ed2e4 WatchSource:0}: Error finding container edcc92696e7ee121f3c2056fdb5fc081d792e5036130bfe912089d6d513ed2e4: Status 404 returned error can't find the container with id edcc92696e7ee121f3c2056fdb5fc081d792e5036130bfe912089d6d513ed2e4 Jan 22 16:34:56 crc kubenswrapper[4704]: I0122 16:34:56.852018 4704 generic.go:334] "Generic (PLEG): container finished" 
podID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerID="28215890624c53097fa338097109e6ab52a3e91d3b34edc725ea5dd28eff3762" exitCode=0 Jan 22 16:34:56 crc kubenswrapper[4704]: I0122 16:34:56.852136 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnbwc" event={"ID":"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da","Type":"ContainerDied","Data":"28215890624c53097fa338097109e6ab52a3e91d3b34edc725ea5dd28eff3762"} Jan 22 16:34:56 crc kubenswrapper[4704]: I0122 16:34:56.852215 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnbwc" event={"ID":"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da","Type":"ContainerStarted","Data":"edcc92696e7ee121f3c2056fdb5fc081d792e5036130bfe912089d6d513ed2e4"} Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.162955 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tz6jz"] Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.164220 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.165833 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.174291 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tz6jz"] Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.315100 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c24550-b095-488a-b3f7-773bcdb8c773-utilities\") pod \"community-operators-tz6jz\" (UID: \"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.315182 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhhr\" (UniqueName: \"kubernetes.io/projected/17c24550-b095-488a-b3f7-773bcdb8c773-kube-api-access-wzhhr\") pod \"community-operators-tz6jz\" (UID: \"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.315231 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c24550-b095-488a-b3f7-773bcdb8c773-catalog-content\") pod \"community-operators-tz6jz\" (UID: \"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.416961 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c24550-b095-488a-b3f7-773bcdb8c773-utilities\") pod \"community-operators-tz6jz\" (UID: 
\"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.417036 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzhhr\" (UniqueName: \"kubernetes.io/projected/17c24550-b095-488a-b3f7-773bcdb8c773-kube-api-access-wzhhr\") pod \"community-operators-tz6jz\" (UID: \"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.417080 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c24550-b095-488a-b3f7-773bcdb8c773-catalog-content\") pod \"community-operators-tz6jz\" (UID: \"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.417556 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17c24550-b095-488a-b3f7-773bcdb8c773-catalog-content\") pod \"community-operators-tz6jz\" (UID: \"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.417974 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17c24550-b095-488a-b3f7-773bcdb8c773-utilities\") pod \"community-operators-tz6jz\" (UID: \"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.435228 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzhhr\" (UniqueName: \"kubernetes.io/projected/17c24550-b095-488a-b3f7-773bcdb8c773-kube-api-access-wzhhr\") pod \"community-operators-tz6jz\" (UID: 
\"17c24550-b095-488a-b3f7-773bcdb8c773\") " pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.526693 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.861070 4704 generic.go:334] "Generic (PLEG): container finished" podID="4841bd3f-e66d-4d5b-8eef-7d7584d19c79" containerID="ea88171f490f9a1c5444153e3072440c6feeeddb1af060079de4af3edf3b2acd" exitCode=0 Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.861125 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7ct" event={"ID":"4841bd3f-e66d-4d5b-8eef-7d7584d19c79","Type":"ContainerDied","Data":"ea88171f490f9a1c5444153e3072440c6feeeddb1af060079de4af3edf3b2acd"} Jan 22 16:34:57 crc kubenswrapper[4704]: I0122 16:34:57.968118 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tz6jz"] Jan 22 16:34:57 crc kubenswrapper[4704]: W0122 16:34:57.977126 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c24550_b095_488a_b3f7_773bcdb8c773.slice/crio-625e6097a0f92bfbf872eae794b5b7027cf71fbf57bb92a5f592161f5c72ddfc WatchSource:0}: Error finding container 625e6097a0f92bfbf872eae794b5b7027cf71fbf57bb92a5f592161f5c72ddfc: Status 404 returned error can't find the container with id 625e6097a0f92bfbf872eae794b5b7027cf71fbf57bb92a5f592161f5c72ddfc Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.167253 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-df2wn"] Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.169164 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.171234 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.182726 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-df2wn"] Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.227747 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-utilities\") pod \"certified-operators-df2wn\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.228496 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-catalog-content\") pod \"certified-operators-df2wn\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.228630 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zln7p\" (UniqueName: \"kubernetes.io/projected/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-kube-api-access-zln7p\") pod \"certified-operators-df2wn\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.330028 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-catalog-content\") pod \"certified-operators-df2wn\" (UID: 
\"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.330088 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zln7p\" (UniqueName: \"kubernetes.io/projected/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-kube-api-access-zln7p\") pod \"certified-operators-df2wn\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.330168 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-utilities\") pod \"certified-operators-df2wn\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.330606 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-utilities\") pod \"certified-operators-df2wn\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.330693 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-catalog-content\") pod \"certified-operators-df2wn\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.351647 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zln7p\" (UniqueName: \"kubernetes.io/projected/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-kube-api-access-zln7p\") pod \"certified-operators-df2wn\" (UID: 
\"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.510892 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.868908 4704 generic.go:334] "Generic (PLEG): container finished" podID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerID="c16a69e4bddcc9e7bb9fdeb6fad9692fca118c997367232c4e3ad680c4010c2b" exitCode=0 Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.868991 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnbwc" event={"ID":"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da","Type":"ContainerDied","Data":"c16a69e4bddcc9e7bb9fdeb6fad9692fca118c997367232c4e3ad680c4010c2b"} Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.870488 4704 generic.go:334] "Generic (PLEG): container finished" podID="17c24550-b095-488a-b3f7-773bcdb8c773" containerID="a1c95d5b2cdc772a0ff7a64a884d00615c30e6efb8476bac395ca4959abd9902" exitCode=0 Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.870523 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz6jz" event={"ID":"17c24550-b095-488a-b3f7-773bcdb8c773","Type":"ContainerDied","Data":"a1c95d5b2cdc772a0ff7a64a884d00615c30e6efb8476bac395ca4959abd9902"} Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.870581 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz6jz" event={"ID":"17c24550-b095-488a-b3f7-773bcdb8c773","Type":"ContainerStarted","Data":"625e6097a0f92bfbf872eae794b5b7027cf71fbf57bb92a5f592161f5c72ddfc"} Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.874288 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rc7ct" 
event={"ID":"4841bd3f-e66d-4d5b-8eef-7d7584d19c79","Type":"ContainerStarted","Data":"6adc639e332845d4d0dd584ecd6067b4e8b93fb1f43e286ba64d352066fac94d"} Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.910851 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rc7ct" podStartSLOduration=2.321152123 podStartE2EDuration="4.910834911s" podCreationTimestamp="2026-01-22 16:34:54 +0000 UTC" firstStartedPulling="2026-01-22 16:34:55.822604801 +0000 UTC m=+388.467151501" lastFinishedPulling="2026-01-22 16:34:58.412287589 +0000 UTC m=+391.056834289" observedRunningTime="2026-01-22 16:34:58.907029951 +0000 UTC m=+391.551576651" watchObservedRunningTime="2026-01-22 16:34:58.910834911 +0000 UTC m=+391.555381611" Jan 22 16:34:58 crc kubenswrapper[4704]: I0122 16:34:58.947722 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-df2wn"] Jan 22 16:34:58 crc kubenswrapper[4704]: W0122 16:34:58.955094 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f7f834d_3a2e_41b1_9b80_6cc0911843a8.slice/crio-e8973c6df9e98e389fa61a54d95d71b6a9911acbbf0b4b8192f1945d35d376d1 WatchSource:0}: Error finding container e8973c6df9e98e389fa61a54d95d71b6a9911acbbf0b4b8192f1945d35d376d1: Status 404 returned error can't find the container with id e8973c6df9e98e389fa61a54d95d71b6a9911acbbf0b4b8192f1945d35d376d1 Jan 22 16:34:59 crc kubenswrapper[4704]: I0122 16:34:59.885053 4704 generic.go:334] "Generic (PLEG): container finished" podID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerID="244beae2cb72799177c34274bd654565eb98ed62db5eb58af574891ef96c9c77" exitCode=0 Jan 22 16:34:59 crc kubenswrapper[4704]: I0122 16:34:59.885147 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df2wn" 
event={"ID":"5f7f834d-3a2e-41b1-9b80-6cc0911843a8","Type":"ContainerDied","Data":"244beae2cb72799177c34274bd654565eb98ed62db5eb58af574891ef96c9c77"} Jan 22 16:34:59 crc kubenswrapper[4704]: I0122 16:34:59.885551 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df2wn" event={"ID":"5f7f834d-3a2e-41b1-9b80-6cc0911843a8","Type":"ContainerStarted","Data":"e8973c6df9e98e389fa61a54d95d71b6a9911acbbf0b4b8192f1945d35d376d1"} Jan 22 16:34:59 crc kubenswrapper[4704]: I0122 16:34:59.892183 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz6jz" event={"ID":"17c24550-b095-488a-b3f7-773bcdb8c773","Type":"ContainerStarted","Data":"51e8bcc9879e69daadcd68c51173499b4111d2cec4253076695abf188f3ac52b"} Jan 22 16:34:59 crc kubenswrapper[4704]: I0122 16:34:59.896022 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnbwc" event={"ID":"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da","Type":"ContainerStarted","Data":"64af9810cbd9e0475238f64f4ddc09adc77e4eb376204a36f7c5997b106cb79c"} Jan 22 16:34:59 crc kubenswrapper[4704]: I0122 16:34:59.929818 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dnbwc" podStartSLOduration=2.298196744 podStartE2EDuration="4.92978032s" podCreationTimestamp="2026-01-22 16:34:55 +0000 UTC" firstStartedPulling="2026-01-22 16:34:56.853744454 +0000 UTC m=+389.498291154" lastFinishedPulling="2026-01-22 16:34:59.48532803 +0000 UTC m=+392.129874730" observedRunningTime="2026-01-22 16:34:59.925460854 +0000 UTC m=+392.570007564" watchObservedRunningTime="2026-01-22 16:34:59.92978032 +0000 UTC m=+392.574327020" Jan 22 16:35:00 crc kubenswrapper[4704]: I0122 16:35:00.903249 4704 generic.go:334] "Generic (PLEG): container finished" podID="17c24550-b095-488a-b3f7-773bcdb8c773" containerID="51e8bcc9879e69daadcd68c51173499b4111d2cec4253076695abf188f3ac52b" exitCode=0 
Jan 22 16:35:00 crc kubenswrapper[4704]: I0122 16:35:00.903349 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz6jz" event={"ID":"17c24550-b095-488a-b3f7-773bcdb8c773","Type":"ContainerDied","Data":"51e8bcc9879e69daadcd68c51173499b4111d2cec4253076695abf188f3ac52b"} Jan 22 16:35:01 crc kubenswrapper[4704]: I0122 16:35:01.910592 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz6jz" event={"ID":"17c24550-b095-488a-b3f7-773bcdb8c773","Type":"ContainerStarted","Data":"501e5cf6e02be7af8f899296b451b437fa9859fc97235a764769137d77e85a8d"} Jan 22 16:35:01 crc kubenswrapper[4704]: I0122 16:35:01.913459 4704 generic.go:334] "Generic (PLEG): container finished" podID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerID="7ea43522f3392d937da2b4561c886add7c83eaa3552bdbf538c892b6e236eac0" exitCode=0 Jan 22 16:35:01 crc kubenswrapper[4704]: I0122 16:35:01.913502 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df2wn" event={"ID":"5f7f834d-3a2e-41b1-9b80-6cc0911843a8","Type":"ContainerDied","Data":"7ea43522f3392d937da2b4561c886add7c83eaa3552bdbf538c892b6e236eac0"} Jan 22 16:35:01 crc kubenswrapper[4704]: I0122 16:35:01.930653 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tz6jz" podStartSLOduration=2.15760071 podStartE2EDuration="4.930632233s" podCreationTimestamp="2026-01-22 16:34:57 +0000 UTC" firstStartedPulling="2026-01-22 16:34:58.871437477 +0000 UTC m=+391.515984177" lastFinishedPulling="2026-01-22 16:35:01.644469 +0000 UTC m=+394.289015700" observedRunningTime="2026-01-22 16:35:01.927058349 +0000 UTC m=+394.571605069" watchObservedRunningTime="2026-01-22 16:35:01.930632233 +0000 UTC m=+394.575178933" Jan 22 16:35:02 crc kubenswrapper[4704]: I0122 16:35:02.920760 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-df2wn" event={"ID":"5f7f834d-3a2e-41b1-9b80-6cc0911843a8","Type":"ContainerStarted","Data":"33d508da5314958930af712b64252d236001d921731ec4bb77574dbb7c49cca5"} Jan 22 16:35:02 crc kubenswrapper[4704]: I0122 16:35:02.939766 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-df2wn" podStartSLOduration=2.526506994 podStartE2EDuration="4.939752127s" podCreationTimestamp="2026-01-22 16:34:58 +0000 UTC" firstStartedPulling="2026-01-22 16:34:59.887269435 +0000 UTC m=+392.531816135" lastFinishedPulling="2026-01-22 16:35:02.300514578 +0000 UTC m=+394.945061268" observedRunningTime="2026-01-22 16:35:02.939120279 +0000 UTC m=+395.583666989" watchObservedRunningTime="2026-01-22 16:35:02.939752127 +0000 UTC m=+395.584298827" Jan 22 16:35:05 crc kubenswrapper[4704]: I0122 16:35:05.086409 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:35:05 crc kubenswrapper[4704]: I0122 16:35:05.086979 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:35:05 crc kubenswrapper[4704]: I0122 16:35:05.134744 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:35:05 crc kubenswrapper[4704]: I0122 16:35:05.978978 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rc7ct" Jan 22 16:35:06 crc kubenswrapper[4704]: I0122 16:35:06.093863 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:35:06 crc kubenswrapper[4704]: I0122 16:35:06.093959 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:35:06 crc 
kubenswrapper[4704]: I0122 16:35:06.131929 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:35:06 crc kubenswrapper[4704]: I0122 16:35:06.999385 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:35:07 crc kubenswrapper[4704]: I0122 16:35:07.528302 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:35:07 crc kubenswrapper[4704]: I0122 16:35:07.528363 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:35:07 crc kubenswrapper[4704]: I0122 16:35:07.567272 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:35:07 crc kubenswrapper[4704]: I0122 16:35:07.991427 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tz6jz" Jan 22 16:35:08 crc kubenswrapper[4704]: I0122 16:35:08.512094 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:35:08 crc kubenswrapper[4704]: I0122 16:35:08.512617 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:35:08 crc kubenswrapper[4704]: I0122 16:35:08.566501 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:35:09 crc kubenswrapper[4704]: I0122 16:35:09.008740 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:35:13 crc kubenswrapper[4704]: I0122 16:35:13.158683 4704 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" podUID="6ded330b-1278-4aea-8eb7-711847e9a54e" containerName="registry" containerID="cri-o://a573e292f139e90dedf58db572cd3d04d932569566aace4c266733f4d8c9214f" gracePeriod=30 Jan 22 16:35:13 crc kubenswrapper[4704]: I0122 16:35:13.311426 4704 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-xvsbg container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.32:5000/healthz\": dial tcp 10.217.0.32:5000: connect: connection refused" start-of-body= Jan 22 16:35:13 crc kubenswrapper[4704]: I0122 16:35:13.311487 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" podUID="6ded330b-1278-4aea-8eb7-711847e9a54e" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.32:5000/healthz\": dial tcp 10.217.0.32:5000: connect: connection refused" Jan 22 16:35:16 crc kubenswrapper[4704]: I0122 16:35:16.999276 4704 generic.go:334] "Generic (PLEG): container finished" podID="6ded330b-1278-4aea-8eb7-711847e9a54e" containerID="a573e292f139e90dedf58db572cd3d04d932569566aace4c266733f4d8c9214f" exitCode=0 Jan 22 16:35:16 crc kubenswrapper[4704]: I0122 16:35:16.999525 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" event={"ID":"6ded330b-1278-4aea-8eb7-711847e9a54e","Type":"ContainerDied","Data":"a573e292f139e90dedf58db572cd3d04d932569566aace4c266733f4d8c9214f"} Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.283392 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412280 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-tls\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412360 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvj2j\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-kube-api-access-nvj2j\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412390 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-trusted-ca\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412589 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412608 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ded330b-1278-4aea-8eb7-711847e9a54e-ca-trust-extracted\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412638 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-certificates\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412659 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ded330b-1278-4aea-8eb7-711847e9a54e-installation-pull-secrets\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.412699 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-bound-sa-token\") pod \"6ded330b-1278-4aea-8eb7-711847e9a54e\" (UID: \"6ded330b-1278-4aea-8eb7-711847e9a54e\") " Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.413436 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.413669 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.418499 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-kube-api-access-nvj2j" (OuterVolumeSpecName: "kube-api-access-nvj2j") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "kube-api-access-nvj2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.421141 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.421877 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.427717 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.428786 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ded330b-1278-4aea-8eb7-711847e9a54e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.440916 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ded330b-1278-4aea-8eb7-711847e9a54e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6ded330b-1278-4aea-8eb7-711847e9a54e" (UID: "6ded330b-1278-4aea-8eb7-711847e9a54e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.520195 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvj2j\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-kube-api-access-nvj2j\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.520229 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.520248 4704 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6ded330b-1278-4aea-8eb7-711847e9a54e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.520256 4704 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.520265 4704 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6ded330b-1278-4aea-8eb7-711847e9a54e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.520274 4704 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:17 crc kubenswrapper[4704]: I0122 16:35:17.520281 4704 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6ded330b-1278-4aea-8eb7-711847e9a54e-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 16:35:18 crc kubenswrapper[4704]: I0122 16:35:18.009407 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" event={"ID":"6ded330b-1278-4aea-8eb7-711847e9a54e","Type":"ContainerDied","Data":"509daafecc6b549e74fa1d923aeb1e8a97e389defa18136b2d46a7ddaa49b4e7"} Jan 22 16:35:18 crc kubenswrapper[4704]: I0122 16:35:18.010755 4704 scope.go:117] "RemoveContainer" containerID="a573e292f139e90dedf58db572cd3d04d932569566aace4c266733f4d8c9214f" Jan 22 16:35:18 crc kubenswrapper[4704]: I0122 16:35:18.009508 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xvsbg" Jan 22 16:35:18 crc kubenswrapper[4704]: I0122 16:35:18.048861 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xvsbg"] Jan 22 16:35:18 crc kubenswrapper[4704]: I0122 16:35:18.066636 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xvsbg"] Jan 22 16:35:19 crc kubenswrapper[4704]: I0122 16:35:19.643752 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ded330b-1278-4aea-8eb7-711847e9a54e" path="/var/lib/kubelet/pods/6ded330b-1278-4aea-8eb7-711847e9a54e/volumes" Jan 22 16:36:49 crc kubenswrapper[4704]: I0122 16:36:49.087144 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:36:49 crc kubenswrapper[4704]: I0122 16:36:49.087747 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:37:19 crc kubenswrapper[4704]: I0122 16:37:19.086923 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:37:19 crc kubenswrapper[4704]: I0122 16:37:19.087467 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:37:27 crc kubenswrapper[4704]: I0122 16:37:27.850178 4704 scope.go:117] "RemoveContainer" containerID="33151e6b81a000da898ad64aab691219da4d84bb90d43832928459ffc89410b3" Jan 22 16:37:49 crc kubenswrapper[4704]: I0122 16:37:49.086678 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:37:49 crc kubenswrapper[4704]: I0122 16:37:49.087372 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:37:49 crc kubenswrapper[4704]: I0122 16:37:49.087457 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:37:49 crc kubenswrapper[4704]: I0122 16:37:49.088427 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ac46dd18c98dc20006d974213963bea845ef28d8c751b219281baa2762ee2d0"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:37:49 crc kubenswrapper[4704]: I0122 16:37:49.088529 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://6ac46dd18c98dc20006d974213963bea845ef28d8c751b219281baa2762ee2d0" gracePeriod=600 Jan 22 16:37:50 crc kubenswrapper[4704]: I0122 16:37:50.002510 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="6ac46dd18c98dc20006d974213963bea845ef28d8c751b219281baa2762ee2d0" exitCode=0 Jan 22 16:37:50 crc kubenswrapper[4704]: I0122 16:37:50.003086 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"6ac46dd18c98dc20006d974213963bea845ef28d8c751b219281baa2762ee2d0"} Jan 22 16:37:50 crc kubenswrapper[4704]: I0122 16:37:50.003117 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"c26a9735fc32abd042dcb8a6ea9f8f47b9946bfd125903a6c3f95bae0b5c2e0d"} Jan 22 16:37:50 crc kubenswrapper[4704]: I0122 16:37:50.003142 4704 scope.go:117] "RemoveContainer" containerID="472b8c837b02223b278946b3b749c037d005e52a819017280faf01387d829462" Jan 22 16:38:27 crc kubenswrapper[4704]: I0122 16:38:27.950863 4704 scope.go:117] "RemoveContainer" containerID="5f6543888d2b9afaebc605e617557aa53893dd4dbe461549d3fc00369b8d27a7" Jan 22 16:38:27 crc kubenswrapper[4704]: I0122 16:38:27.981954 4704 scope.go:117] "RemoveContainer" containerID="788170eef95fd0ebe52c19196912857df4b72ed1cf0508496b0128bc67023cc1" Jan 22 16:38:27 crc kubenswrapper[4704]: I0122 16:38:27.998203 4704 scope.go:117] "RemoveContainer" containerID="c1b05bd8cde56421c1f3fc4312495394fbd48b60661a659c408cf9f93e1f8395" Jan 22 16:39:49 crc kubenswrapper[4704]: I0122 16:39:49.087438 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:39:49 crc kubenswrapper[4704]: I0122 16:39:49.088076 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:40:19 crc kubenswrapper[4704]: I0122 16:40:19.086285 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:40:19 crc kubenswrapper[4704]: I0122 16:40:19.087025 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:40:28 crc kubenswrapper[4704]: I0122 16:40:28.090762 4704 scope.go:117] "RemoveContainer" containerID="fd2495473d4853af3b12f03f8b12829e1735122f218c05638d8f960873a90df5" Jan 22 16:40:41 crc kubenswrapper[4704]: I0122 16:40:41.845449 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4"] Jan 22 16:40:41 crc kubenswrapper[4704]: E0122 16:40:41.846239 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ded330b-1278-4aea-8eb7-711847e9a54e" containerName="registry" Jan 22 16:40:41 crc kubenswrapper[4704]: I0122 16:40:41.846255 4704 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6ded330b-1278-4aea-8eb7-711847e9a54e" containerName="registry" Jan 22 16:40:41 crc kubenswrapper[4704]: I0122 16:40:41.846368 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ded330b-1278-4aea-8eb7-711847e9a54e" containerName="registry" Jan 22 16:40:41 crc kubenswrapper[4704]: I0122 16:40:41.847231 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:41 crc kubenswrapper[4704]: I0122 16:40:41.848919 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 16:40:41 crc kubenswrapper[4704]: I0122 16:40:41.856859 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4"] Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.043000 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.043050 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8d5\" (UniqueName: \"kubernetes.io/projected/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-kube-api-access-ld8d5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.043102 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.144344 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.144401 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8d5\" (UniqueName: \"kubernetes.io/projected/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-kube-api-access-ld8d5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.144442 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.144744 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.144835 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.166266 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8d5\" (UniqueName: \"kubernetes.io/projected/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-kube-api-access-ld8d5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.464653 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:42 crc kubenswrapper[4704]: I0122 16:40:42.680334 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4"] Jan 22 16:40:43 crc kubenswrapper[4704]: I0122 16:40:43.190295 4704 generic.go:334] "Generic (PLEG): container finished" podID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerID="996ce3c9799f984e4b7fcc150d42f36fab0fb06517f8c6bbe884acbb960a9da8" exitCode=0 Jan 22 16:40:43 crc kubenswrapper[4704]: I0122 16:40:43.190538 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" event={"ID":"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d","Type":"ContainerDied","Data":"996ce3c9799f984e4b7fcc150d42f36fab0fb06517f8c6bbe884acbb960a9da8"} Jan 22 16:40:43 crc kubenswrapper[4704]: I0122 16:40:43.190607 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" event={"ID":"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d","Type":"ContainerStarted","Data":"cf5207af167d51a21bcebc8d9592e5337fe11905450c5b51c15d0744546b001e"} Jan 22 16:40:43 crc kubenswrapper[4704]: I0122 16:40:43.191672 4704 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:40:45 crc kubenswrapper[4704]: I0122 16:40:45.203510 4704 generic.go:334] "Generic (PLEG): container finished" podID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerID="35db58db0eb380748f11476c3d21101ff9b3b5d29db711d7d4dbbd3a4183e8f1" exitCode=0 Jan 22 16:40:45 crc kubenswrapper[4704]: I0122 16:40:45.203556 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" 
event={"ID":"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d","Type":"ContainerDied","Data":"35db58db0eb380748f11476c3d21101ff9b3b5d29db711d7d4dbbd3a4183e8f1"} Jan 22 16:40:46 crc kubenswrapper[4704]: I0122 16:40:46.215786 4704 generic.go:334] "Generic (PLEG): container finished" podID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerID="e6a4eb5257ebf6222236607092c57fd357ca3a99a7d0e369302bf6670c851ef5" exitCode=0 Jan 22 16:40:46 crc kubenswrapper[4704]: I0122 16:40:46.215856 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" event={"ID":"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d","Type":"ContainerDied","Data":"e6a4eb5257ebf6222236607092c57fd357ca3a99a7d0e369302bf6670c851ef5"} Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.475086 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.619784 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld8d5\" (UniqueName: \"kubernetes.io/projected/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-kube-api-access-ld8d5\") pod \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.619886 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-bundle\") pod \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\" (UID: \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.619974 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-util\") pod \"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\" (UID: 
\"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d\") " Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.622106 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-bundle" (OuterVolumeSpecName: "bundle") pod "ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" (UID: "ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.631781 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-kube-api-access-ld8d5" (OuterVolumeSpecName: "kube-api-access-ld8d5") pod "ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" (UID: "ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d"). InnerVolumeSpecName "kube-api-access-ld8d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.634091 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-util" (OuterVolumeSpecName: "util") pod "ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" (UID: "ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.721227 4704 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.721280 4704 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:47 crc kubenswrapper[4704]: I0122 16:40:47.721300 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld8d5\" (UniqueName: \"kubernetes.io/projected/ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d-kube-api-access-ld8d5\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:48 crc kubenswrapper[4704]: I0122 16:40:48.230685 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" event={"ID":"ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d","Type":"ContainerDied","Data":"cf5207af167d51a21bcebc8d9592e5337fe11905450c5b51c15d0744546b001e"} Jan 22 16:40:48 crc kubenswrapper[4704]: I0122 16:40:48.231077 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5207af167d51a21bcebc8d9592e5337fe11905450c5b51c15d0744546b001e" Jan 22 16:40:48 crc kubenswrapper[4704]: I0122 16:40:48.230881 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4" Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.086452 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.086509 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.086554 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.087160 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c26a9735fc32abd042dcb8a6ea9f8f47b9946bfd125903a6c3f95bae0b5c2e0d"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.087217 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://c26a9735fc32abd042dcb8a6ea9f8f47b9946bfd125903a6c3f95bae0b5c2e0d" gracePeriod=600 Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.239664 4704 generic.go:334] "Generic (PLEG): 
container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="c26a9735fc32abd042dcb8a6ea9f8f47b9946bfd125903a6c3f95bae0b5c2e0d" exitCode=0 Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.239857 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"c26a9735fc32abd042dcb8a6ea9f8f47b9946bfd125903a6c3f95bae0b5c2e0d"} Jan 22 16:40:49 crc kubenswrapper[4704]: I0122 16:40:49.239899 4704 scope.go:117] "RemoveContainer" containerID="6ac46dd18c98dc20006d974213963bea845ef28d8c751b219281baa2762ee2d0" Jan 22 16:40:50 crc kubenswrapper[4704]: I0122 16:40:50.248930 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"c8865a0e2381cbeec53f87553007cf63e787be4f45fe167d5da2b4f406dd127d"} Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.568401 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q8h4x"] Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.569087 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-controller" containerID="cri-o://20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.569188 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="nbdb" containerID="cri-o://36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.569248 4704 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-acl-logging" containerID="cri-o://8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.569261 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-node" containerID="cri-o://34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.569386 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="sbdb" containerID="cri-o://ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.569421 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.569400 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="northd" containerID="cri-o://ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.621489 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" 
containerName="ovnkube-controller" containerID="cri-o://c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54" gracePeriod=30 Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.866869 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/3.log" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.874316 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovn-acl-logging/0.log" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.874930 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovn-controller/0.log" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.875341 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929299 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f2ptp"] Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929532 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="northd" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929547 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="northd" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929561 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-node" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929569 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-node" Jan 22 16:40:52 crc 
kubenswrapper[4704]: E0122 16:40:52.929581 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929592 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929601 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929611 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929623 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929632 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929644 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929652 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929662 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929670 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 
16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929680 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="nbdb" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929687 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="nbdb" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929695 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-acl-logging" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929703 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-acl-logging" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929711 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929719 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929731 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kubecfg-setup" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929738 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kubecfg-setup" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929749 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerName="pull" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929757 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerName="pull" Jan 22 16:40:52 crc kubenswrapper[4704]: 
E0122 16:40:52.929767 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerName="util" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929775 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerName="util" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929784 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerName="extract" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929812 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerName="extract" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.929824 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="sbdb" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929833 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="sbdb" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929944 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929957 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929968 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-acl-logging" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929981 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.929991 4704 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="northd" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930007 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="kube-rbac-proxy-node" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930020 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="sbdb" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930035 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d" containerName="extract" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930049 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovn-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930063 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930072 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="nbdb" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930081 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: E0122 16:40:52.930209 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930218 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.930331 4704 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerName="ovnkube-controller" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.932327 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992095 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-env-overrides\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992493 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-kubelet\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992553 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-systemd-units\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992573 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992589 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-netns\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992595 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992631 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992649 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-slash\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992668 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-node-log\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992707 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992711 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-bin\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992754 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-node-log" (OuterVolumeSpecName: "node-log") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992757 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-netd\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992758 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-slash" (OuterVolumeSpecName: "host-slash") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992818 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-var-lib-openvswitch\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992847 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-systemd\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992778 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992763 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992843 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992889 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovn-node-metrics-cert\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992916 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-log-socket\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992967 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.992992 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-script-lib\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993009 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-ovn\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993038 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-etc-openvswitch\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993058 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkqnk\" (UniqueName: \"kubernetes.io/projected/fce29525-000a-4c91-8765-67c0c3f1ae7e-kube-api-access-hkqnk\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993079 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-openvswitch\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993101 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-config\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993128 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-ovn-kubernetes\") pod \"fce29525-000a-4c91-8765-67c0c3f1ae7e\" (UID: \"fce29525-000a-4c91-8765-67c0c3f1ae7e\") " Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993317 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-kubelet\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993345 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovnkube-config\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993372 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-env-overrides\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993397 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfbd5\" 
(UniqueName: \"kubernetes.io/projected/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-kube-api-access-wfbd5\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993421 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-node-log\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993450 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-systemd-units\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993473 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-slash\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993495 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993515 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-run-ovn-kubernetes\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993548 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-etc-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993571 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovnkube-script-lib\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993609 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-var-lib-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993632 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-ovn\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993665 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-log-socket\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993692 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-systemd\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993729 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993753 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-cni-netd\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993809 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-cni-bin\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 
16:40:52.993837 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovn-node-metrics-cert\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993860 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-run-netns\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993899 4704 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993912 4704 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993925 4704 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993936 4704 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993947 4704 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993960 4704 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993971 4704 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993982 4704 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993993 4704 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993631 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-log-socket" (OuterVolumeSpecName: "log-socket") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993651 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993964 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993982 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.993997 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.994429 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.994458 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.994487 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.999090 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:40:52 crc kubenswrapper[4704]: I0122 16:40:52.999333 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce29525-000a-4c91-8765-67c0c3f1ae7e-kube-api-access-hkqnk" (OuterVolumeSpecName: "kube-api-access-hkqnk") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "kube-api-access-hkqnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.017526 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fce29525-000a-4c91-8765-67c0c3f1ae7e" (UID: "fce29525-000a-4c91-8765-67c0c3f1ae7e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095582 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-cni-bin\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095626 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovn-node-metrics-cert\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095645 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-run-netns\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095664 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-kubelet\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095681 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovnkube-config\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095701 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-env-overrides\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095704 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-cni-bin\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095716 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfbd5\" (UniqueName: \"kubernetes.io/projected/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-kube-api-access-wfbd5\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.095722 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-run-netns\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc 
kubenswrapper[4704]: I0122 16:40:53.095810 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-kubelet\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096056 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-node-log\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096138 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-node-log\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096179 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-systemd-units\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096204 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-slash\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096252 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-systemd-units\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096297 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096275 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096338 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-run-ovn-kubernetes\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096246 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-slash\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096391 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovnkube-config\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096398 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-etc-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096421 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-run-ovn-kubernetes\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096437 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovnkube-script-lib\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096429 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-etc-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096485 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-env-overrides\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096496 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-var-lib-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096554 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-ovn\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096521 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-var-lib-openvswitch\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096612 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-ovn\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096619 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-log-socket\") pod \"ovnkube-node-f2ptp\" 
(UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096655 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-systemd\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096727 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-run-systemd\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096735 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-log-socket\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096761 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096822 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-cni-netd\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096834 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096856 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-host-cni-netd\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096937 4704 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096960 4704 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.096978 4704 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097000 4704 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc 
kubenswrapper[4704]: I0122 16:40:53.097017 4704 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097033 4704 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097049 4704 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097066 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkqnk\" (UniqueName: \"kubernetes.io/projected/fce29525-000a-4c91-8765-67c0c3f1ae7e-kube-api-access-hkqnk\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097083 4704 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097099 4704 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fce29525-000a-4c91-8765-67c0c3f1ae7e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097115 4704 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fce29525-000a-4c91-8765-67c0c3f1ae7e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.097184 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovnkube-script-lib\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.098628 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-ovn-node-metrics-cert\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.117684 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfbd5\" (UniqueName: \"kubernetes.io/projected/1b37b4b3-d13f-47e7-8a75-2cf467ecc917-kube-api-access-wfbd5\") pod \"ovnkube-node-f2ptp\" (UID: \"1b37b4b3-d13f-47e7-8a75-2cf467ecc917\") " pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.245684 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:40:53 crc kubenswrapper[4704]: W0122 16:40:53.262473 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b37b4b3_d13f_47e7_8a75_2cf467ecc917.slice/crio-b69c4752f14fa7dcfc07ecf257f9b26c2ff3ef8708afcb6addfb2aca2e2c3703 WatchSource:0}: Error finding container b69c4752f14fa7dcfc07ecf257f9b26c2ff3ef8708afcb6addfb2aca2e2c3703: Status 404 returned error can't find the container with id b69c4752f14fa7dcfc07ecf257f9b26c2ff3ef8708afcb6addfb2aca2e2c3703 Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.271918 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovnkube-controller/3.log" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.274208 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovn-acl-logging/0.log" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.274717 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q8h4x_fce29525-000a-4c91-8765-67c0c3f1ae7e/ovn-controller/0.log" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275087 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54" exitCode=0 Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275143 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516" exitCode=0 Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275155 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" 
containerID="36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b" exitCode=0
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275166 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747" exitCode=0
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275170 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275174 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd" exitCode=0
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275347 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a" exitCode=0
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275393 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a" exitCode=143
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275412 4704 generic.go:334] "Generic (PLEG): container finished" podID="fce29525-000a-4c91-8765-67c0c3f1ae7e" containerID="20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de" exitCode=143
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275171 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275498 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275538 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275554 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275568 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275582 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275595 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275629 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275636 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275639 4704 scope.go:117] "RemoveContainer" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275643 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275716 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275730 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275737 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275743 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275749 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275760 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275810 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275820 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275827 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275834 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275841 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275869 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275877 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275883 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275890 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275898 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275908 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275919 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275947 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275955 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275962 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275968 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275974 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275984 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275991 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.275998 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276024 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276034 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q8h4x" event={"ID":"fce29525-000a-4c91-8765-67c0c3f1ae7e","Type":"ContainerDied","Data":"da90047f784a6ba3378431364b6575c6f9218b8f136c68799c145c134d49021d"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276047 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276055 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276061 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276068 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276074 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276080 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276107 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276113 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276119 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.276125 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.278233 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/2.log"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.278912 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/1.log"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.278959 4704 generic.go:334] "Generic (PLEG): container finished" podID="9357b7a7-d902-4f7e-97b9-b0a7871ec95e" containerID="6c4b1bdc0188a97a87e635a079219bea7a676bb95436b887abb9fc74e596b72d" exitCode=2
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.279023 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerDied","Data":"6c4b1bdc0188a97a87e635a079219bea7a676bb95436b887abb9fc74e596b72d"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.279050 4704 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.279596 4704 scope.go:117] "RemoveContainer" containerID="6c4b1bdc0188a97a87e635a079219bea7a676bb95436b887abb9fc74e596b72d"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.284292 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"b69c4752f14fa7dcfc07ecf257f9b26c2ff3ef8708afcb6addfb2aca2e2c3703"}
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.324030 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.347786 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q8h4x"]
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.351276 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q8h4x"]
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.376825 4704 scope.go:117] "RemoveContainer" containerID="ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.390142 4704 scope.go:117] "RemoveContainer" containerID="36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.406015 4704 scope.go:117] "RemoveContainer" containerID="ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.425452 4704 scope.go:117] "RemoveContainer" containerID="106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.484981 4704 scope.go:117] "RemoveContainer" containerID="34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.499372 4704 scope.go:117] "RemoveContainer" containerID="8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.512063 4704 scope.go:117] "RemoveContainer" containerID="20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.526990 4704 scope.go:117] "RemoveContainer" containerID="9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.538480 4704 scope.go:117] "RemoveContainer" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.538991 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": container with ID starting with c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54 not found: ID does not exist" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.539051 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"} err="failed to get container status \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": rpc error: code = NotFound desc = could not find container \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": container with ID starting with c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.539092 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.542240 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": container with ID starting with 15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57 not found: ID does not exist" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.542285 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"} err="failed to get container status \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": rpc error: code = NotFound desc = could not find container \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": container with ID starting with 15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.542314 4704 scope.go:117] "RemoveContainer" containerID="ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.542749 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": container with ID starting with ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516 not found: ID does not exist" containerID="ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.542786 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"} err="failed to get container status \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": rpc error: code = NotFound desc = could not find container \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": container with ID starting with ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.543062 4704 scope.go:117] "RemoveContainer" containerID="36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.543404 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": container with ID starting with 36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b not found: ID does not exist" containerID="36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.543463 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"} err="failed to get container status \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": rpc error: code = NotFound desc = could not find container \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": container with ID starting with 36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.543481 4704 scope.go:117] "RemoveContainer" containerID="ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.543752 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": container with ID starting with ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747 not found: ID does not exist" containerID="ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.543778 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"} err="failed to get container status \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": rpc error: code = NotFound desc = could not find container \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": container with ID starting with ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.543813 4704 scope.go:117] "RemoveContainer" containerID="106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.544090 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": container with ID starting with 106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd not found: ID does not exist" containerID="106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544116 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"} err="failed to get container status \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": rpc error: code = NotFound desc = could not find container \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": container with ID starting with 106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544134 4704 scope.go:117] "RemoveContainer" containerID="34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.544368 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": container with ID starting with 34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a not found: ID does not exist" containerID="34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544396 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"} err="failed to get container status \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": rpc error: code = NotFound desc = could not find container \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": container with ID starting with 34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544412 4704 scope.go:117] "RemoveContainer" containerID="8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.544601 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": container with ID starting with 8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a not found: ID does not exist" containerID="8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544621 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"} err="failed to get container status \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": rpc error: code = NotFound desc = could not find container \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": container with ID starting with 8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544637 4704 scope.go:117] "RemoveContainer" containerID="20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.544909 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": container with ID starting with 20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de not found: ID does not exist" containerID="20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544931 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"} err="failed to get container status \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": rpc error: code = NotFound desc = could not find container \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": container with ID starting with 20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.544947 4704 scope.go:117] "RemoveContainer" containerID="9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"
Jan 22 16:40:53 crc kubenswrapper[4704]: E0122 16:40:53.545180 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": container with ID starting with 9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62 not found: ID does not exist" containerID="9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.545207 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"} err="failed to get container status \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": rpc error: code = NotFound desc = could not find container \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": container with ID starting with 9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.545225 4704 scope.go:117] "RemoveContainer" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.545459 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"} err="failed to get container status \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": rpc error: code = NotFound desc = could not find container \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": container with ID starting with c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.545479 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.545734 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"} err="failed to get container status \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": rpc error: code = NotFound desc = could not find container \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": container with ID starting with 15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.545760 4704 scope.go:117] "RemoveContainer" containerID="ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546005 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"} err="failed to get container status \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": rpc error: code = NotFound desc = could not find container \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": container with ID starting with ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546037 4704 scope.go:117] "RemoveContainer" containerID="36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546305 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"} err="failed to get container status \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": rpc error: code = NotFound desc = could not find container \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": container with ID starting with 36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546331 4704 scope.go:117] "RemoveContainer" containerID="ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546524 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"} err="failed to get container status \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": rpc error: code = NotFound desc = could not find container \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": container with ID starting with ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546546 4704 scope.go:117] "RemoveContainer" containerID="106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546743 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"} err="failed to get container status \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": rpc error: code = NotFound desc = could not find container \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": container with ID starting with 106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.546760 4704 scope.go:117] "RemoveContainer" containerID="34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.553214 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"} err="failed to get container status \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": rpc error: code = NotFound desc = could not find container \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": container with ID starting with 34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.553273 4704 scope.go:117] "RemoveContainer" containerID="8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.553555 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"} err="failed to get container status \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": rpc error: code = NotFound desc = could not find container \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": container with ID starting with 8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.553592 4704 scope.go:117] "RemoveContainer" containerID="20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.553829 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"} err="failed to get container status \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": rpc error: code = NotFound desc = could not find container \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": container with ID starting with 20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.553847 4704 scope.go:117] "RemoveContainer" containerID="9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.554191 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"} err="failed to get container status \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": rpc error: code = NotFound desc = could not find container \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": container with ID starting with 9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.554227 4704 scope.go:117] "RemoveContainer" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.554498 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"} err="failed to get container status \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": rpc error: code = NotFound desc = could not find container \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": container with ID starting with c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54 not found: ID does not exist"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.554515 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"
Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.554809 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"} err="failed to get container status \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": rpc error: code = NotFound desc = could not find container \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": container with ID starting with 15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57 not found: ID does not
exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.554832 4704 scope.go:117] "RemoveContainer" containerID="ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.555330 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"} err="failed to get container status \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": rpc error: code = NotFound desc = could not find container \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": container with ID starting with ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.555346 4704 scope.go:117] "RemoveContainer" containerID="36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.555606 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"} err="failed to get container status \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": rpc error: code = NotFound desc = could not find container \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": container with ID starting with 36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.555620 4704 scope.go:117] "RemoveContainer" containerID="ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.555899 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"} err="failed to get container status 
\"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": rpc error: code = NotFound desc = could not find container \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": container with ID starting with ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.555912 4704 scope.go:117] "RemoveContainer" containerID="106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.556183 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"} err="failed to get container status \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": rpc error: code = NotFound desc = could not find container \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": container with ID starting with 106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.556214 4704 scope.go:117] "RemoveContainer" containerID="34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.556542 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"} err="failed to get container status \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": rpc error: code = NotFound desc = could not find container \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": container with ID starting with 34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.556556 4704 scope.go:117] "RemoveContainer" 
containerID="8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.556938 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"} err="failed to get container status \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": rpc error: code = NotFound desc = could not find container \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": container with ID starting with 8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.556968 4704 scope.go:117] "RemoveContainer" containerID="20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.557193 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"} err="failed to get container status \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": rpc error: code = NotFound desc = could not find container \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": container with ID starting with 20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.557215 4704 scope.go:117] "RemoveContainer" containerID="9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.557573 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"} err="failed to get container status \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": rpc error: code = NotFound desc = could 
not find container \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": container with ID starting with 9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.557587 4704 scope.go:117] "RemoveContainer" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.557971 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"} err="failed to get container status \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": rpc error: code = NotFound desc = could not find container \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": container with ID starting with c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.557985 4704 scope.go:117] "RemoveContainer" containerID="15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.558206 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57"} err="failed to get container status \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": rpc error: code = NotFound desc = could not find container \"15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57\": container with ID starting with 15f0dfccb0cc8a87881affa31e74fc7dd484842fa94d1d55e1b8afa5c05d3f57 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.558220 4704 scope.go:117] "RemoveContainer" containerID="ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 
16:40:53.558479 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516"} err="failed to get container status \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": rpc error: code = NotFound desc = could not find container \"ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516\": container with ID starting with ef857259f5a071f6f3d86a3e2274973a7b33f55ce8d3e6ea82566ea84e564516 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.558492 4704 scope.go:117] "RemoveContainer" containerID="36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.558715 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b"} err="failed to get container status \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": rpc error: code = NotFound desc = could not find container \"36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b\": container with ID starting with 36ccaf94ddb84b318eaf57551204e96c010a5da36634c3eff2a7dac339f5c76b not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.558728 4704 scope.go:117] "RemoveContainer" containerID="ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.558962 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747"} err="failed to get container status \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": rpc error: code = NotFound desc = could not find container \"ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747\": container with ID starting with 
ec08a0481b0fceff3a80bc0679bf4fa320a6d736ce8b8e0bdf72eff63af8c747 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.558977 4704 scope.go:117] "RemoveContainer" containerID="106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559200 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd"} err="failed to get container status \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": rpc error: code = NotFound desc = could not find container \"106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd\": container with ID starting with 106cc1232d52c1a17763cb91e4e9c279ca0e669e21b3338256d2432ba03f55cd not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559220 4704 scope.go:117] "RemoveContainer" containerID="34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559455 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a"} err="failed to get container status \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": rpc error: code = NotFound desc = could not find container \"34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a\": container with ID starting with 34a91518303cdc655b11587819d85861a1b53b6eab280c7a4d0c730814ab230a not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559471 4704 scope.go:117] "RemoveContainer" containerID="8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559693 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a"} err="failed to get container status \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": rpc error: code = NotFound desc = could not find container \"8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a\": container with ID starting with 8f87b6ce58026f092832398e284798fd74d815200fc4cc1f4f870e2924418a7a not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559706 4704 scope.go:117] "RemoveContainer" containerID="20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559969 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de"} err="failed to get container status \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": rpc error: code = NotFound desc = could not find container \"20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de\": container with ID starting with 20d4314091336303912f999846199ad2ad832ca013f29d1d49880c18835967de not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.559983 4704 scope.go:117] "RemoveContainer" containerID="9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.560211 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62"} err="failed to get container status \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": rpc error: code = NotFound desc = could not find container \"9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62\": container with ID starting with 9aa807b3a02d4e02d98b909e16bd7695faef1735a0a4a28f3ff58ea9b1b31a62 not found: ID does not 
exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.560224 4704 scope.go:117] "RemoveContainer" containerID="c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.560444 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54"} err="failed to get container status \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": rpc error: code = NotFound desc = could not find container \"c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54\": container with ID starting with c5a4aff3e7b3f6cb2207cee84f6ae514c3a48549674797147d00996623daae54 not found: ID does not exist" Jan 22 16:40:53 crc kubenswrapper[4704]: I0122 16:40:53.639304 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce29525-000a-4c91-8765-67c0c3f1ae7e" path="/var/lib/kubelet/pods/fce29525-000a-4c91-8765-67c0c3f1ae7e/volumes" Jan 22 16:40:54 crc kubenswrapper[4704]: I0122 16:40:54.290515 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/2.log" Jan 22 16:40:54 crc kubenswrapper[4704]: I0122 16:40:54.291741 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/1.log" Jan 22 16:40:54 crc kubenswrapper[4704]: I0122 16:40:54.291815 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-77bsn" event={"ID":"9357b7a7-d902-4f7e-97b9-b0a7871ec95e","Type":"ContainerStarted","Data":"9e2699fabbd1dc6fc22e6836a4c8f24d6711f1893b4b68bfd770fd36e3c18fdc"} Jan 22 16:40:54 crc kubenswrapper[4704]: I0122 16:40:54.293701 4704 generic.go:334] "Generic (PLEG): container finished" podID="1b37b4b3-d13f-47e7-8a75-2cf467ecc917" 
containerID="1fed8796e19b71e960724ef35ca123851ef057f8d1619811a3c9f4f80772f72a" exitCode=0 Jan 22 16:40:54 crc kubenswrapper[4704]: I0122 16:40:54.293743 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerDied","Data":"1fed8796e19b71e960724ef35ca123851ef057f8d1619811a3c9f4f80772f72a"} Jan 22 16:40:55 crc kubenswrapper[4704]: I0122 16:40:55.305041 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"894fd9a19145ce2f031c4be010abd38057336d7f18b5538568b2bf190c552910"} Jan 22 16:40:55 crc kubenswrapper[4704]: I0122 16:40:55.305590 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"f3fab421f60656e2ae78c93d9442f407dacb94ef693597f4eeb22e8752f35446"} Jan 22 16:40:55 crc kubenswrapper[4704]: I0122 16:40:55.305601 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"a7f0d739f861fb8e498e11935d6b066aaed00b6738b7fc53368e1d42741bf1fe"} Jan 22 16:40:55 crc kubenswrapper[4704]: I0122 16:40:55.305611 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"d343e046907b4709e6c7c261ec8d49e016baf2e4931c72721027d1f0f14e8d86"} Jan 22 16:40:55 crc kubenswrapper[4704]: I0122 16:40:55.305620 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" 
event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"31fb259e376db333832e3afa648988b42a93beaa7647ff438f3e42bb54e9350e"} Jan 22 16:40:55 crc kubenswrapper[4704]: I0122 16:40:55.305644 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"ff9618bc512fa449498d8ac90542a4b31a38f1649cc51f8a0761a691c8011018"} Jan 22 16:40:57 crc kubenswrapper[4704]: I0122 16:40:57.327330 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"07367c4890cc7cc9e854df2e4bd3f1dda5eeaf9280d2d7fcfdc494f1ea36210f"} Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.719190 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd"] Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.720500 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.722588 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.723140 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-bsqxs" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.723259 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.774037 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qcp\" (UniqueName: \"kubernetes.io/projected/8e5411a6-6909-463f-9794-35459abc62ff-kube-api-access-v9qcp\") pod \"obo-prometheus-operator-68bc856cb9-nptfd\" (UID: \"8e5411a6-6909-463f-9794-35459abc62ff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.835505 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc"] Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.836480 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.838906 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.839415 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-jj9jp" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.845104 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn"] Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.846085 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.874913 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8aeac14-9541-4d77-a63a-087807303ca7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn\" (UID: \"c8aeac14-9541-4d77-a63a-087807303ca7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.874973 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/581b3ed3-6843-4e85-8187-2718699e8964-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc\" (UID: \"581b3ed3-6843-4e85-8187-2718699e8964\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.875009 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8aeac14-9541-4d77-a63a-087807303ca7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn\" (UID: \"c8aeac14-9541-4d77-a63a-087807303ca7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.875052 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/581b3ed3-6843-4e85-8187-2718699e8964-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc\" (UID: \"581b3ed3-6843-4e85-8187-2718699e8964\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.875077 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9qcp\" (UniqueName: \"kubernetes.io/projected/8e5411a6-6909-463f-9794-35459abc62ff-kube-api-access-v9qcp\") pod \"obo-prometheus-operator-68bc856cb9-nptfd\" (UID: \"8e5411a6-6909-463f-9794-35459abc62ff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.901560 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9qcp\" (UniqueName: \"kubernetes.io/projected/8e5411a6-6909-463f-9794-35459abc62ff-kube-api-access-v9qcp\") pod \"obo-prometheus-operator-68bc856cb9-nptfd\" (UID: \"8e5411a6-6909-463f-9794-35459abc62ff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.975891 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8aeac14-9541-4d77-a63a-087807303ca7-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn\" (UID: \"c8aeac14-9541-4d77-a63a-087807303ca7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.975947 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/581b3ed3-6843-4e85-8187-2718699e8964-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc\" (UID: \"581b3ed3-6843-4e85-8187-2718699e8964\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.975972 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8aeac14-9541-4d77-a63a-087807303ca7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn\" (UID: \"c8aeac14-9541-4d77-a63a-087807303ca7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.976003 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/581b3ed3-6843-4e85-8187-2718699e8964-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc\" (UID: \"581b3ed3-6843-4e85-8187-2718699e8964\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.979144 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/581b3ed3-6843-4e85-8187-2718699e8964-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc\" (UID: \"581b3ed3-6843-4e85-8187-2718699e8964\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.979228 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/581b3ed3-6843-4e85-8187-2718699e8964-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc\" (UID: \"581b3ed3-6843-4e85-8187-2718699e8964\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.980029 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c8aeac14-9541-4d77-a63a-087807303ca7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn\" (UID: \"c8aeac14-9541-4d77-a63a-087807303ca7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:40:59 crc kubenswrapper[4704]: I0122 16:40:59.990329 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c8aeac14-9541-4d77-a63a-087807303ca7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn\" (UID: \"c8aeac14-9541-4d77-a63a-087807303ca7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.031874 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4tbm7"] Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.032492 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.034322 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.035527 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-d9mth" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.038895 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.060777 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(a5f5806238d24a89ae77ec8785662720a19d835c7062ecc35ad19d05438c57e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.060943 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(a5f5806238d24a89ae77ec8785662720a19d835c7062ecc35ad19d05438c57e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.061024 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(a5f5806238d24a89ae77ec8785662720a19d835c7062ecc35ad19d05438c57e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.061124 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators(8e5411a6-6909-463f-9794-35459abc62ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators(8e5411a6-6909-463f-9794-35459abc62ff)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(a5f5806238d24a89ae77ec8785662720a19d835c7062ecc35ad19d05438c57e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" podUID="8e5411a6-6909-463f-9794-35459abc62ff" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.076964 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43a41bb-98a7-48f7-8a29-1dc807c5ad5e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4tbm7\" (UID: \"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e\") " pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.077044 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9t69\" (UniqueName: \"kubernetes.io/projected/e43a41bb-98a7-48f7-8a29-1dc807c5ad5e-kube-api-access-q9t69\") pod \"observability-operator-59bdc8b94-4tbm7\" (UID: \"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e\") " pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.142011 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4j4ln"] Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.142687 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:00 crc kubenswrapper[4704]: W0122 16:41:00.144614 4704 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-rxdxm": failed to list *v1.Secret: secrets "perses-operator-dockercfg-rxdxm" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.144662 4704 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-rxdxm\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"perses-operator-dockercfg-rxdxm\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.156233 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.168152 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.180076 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9t69\" (UniqueName: \"kubernetes.io/projected/e43a41bb-98a7-48f7-8a29-1dc807c5ad5e-kube-api-access-q9t69\") pod \"observability-operator-59bdc8b94-4tbm7\" (UID: \"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e\") " pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.180210 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/be3ad292-b6cf-42bc-8eee-2768c60702be-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4j4ln\" (UID: \"be3ad292-b6cf-42bc-8eee-2768c60702be\") " pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.180400 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2s9r\" (UniqueName: \"kubernetes.io/projected/be3ad292-b6cf-42bc-8eee-2768c60702be-kube-api-access-v2s9r\") pod \"perses-operator-5bf474d74f-4j4ln\" (UID: \"be3ad292-b6cf-42bc-8eee-2768c60702be\") " pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.180465 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43a41bb-98a7-48f7-8a29-1dc807c5ad5e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4tbm7\" (UID: \"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e\") " pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.197551 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43a41bb-98a7-48f7-8a29-1dc807c5ad5e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4tbm7\" (UID: \"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e\") " pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.200254 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(1b7903f83b78429b96785234d0549bbaa76e657fa15f34b3efc31b5b6a437324): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.200691 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(1b7903f83b78429b96785234d0549bbaa76e657fa15f34b3efc31b5b6a437324): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.200774 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(1b7903f83b78429b96785234d0549bbaa76e657fa15f34b3efc31b5b6a437324): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.201087 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators(581b3ed3-6843-4e85-8187-2718699e8964)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators(581b3ed3-6843-4e85-8187-2718699e8964)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(1b7903f83b78429b96785234d0549bbaa76e657fa15f34b3efc31b5b6a437324): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" podUID="581b3ed3-6843-4e85-8187-2718699e8964" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.202517 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(c4e4ae35c8843a6f84d40ebd7bb8764da5b1f11f910535c031911945f13d1aa6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.202607 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(c4e4ae35c8843a6f84d40ebd7bb8764da5b1f11f910535c031911945f13d1aa6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.202629 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(c4e4ae35c8843a6f84d40ebd7bb8764da5b1f11f910535c031911945f13d1aa6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.202687 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators(c8aeac14-9541-4d77-a63a-087807303ca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators(c8aeac14-9541-4d77-a63a-087807303ca7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(c4e4ae35c8843a6f84d40ebd7bb8764da5b1f11f910535c031911945f13d1aa6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" podUID="c8aeac14-9541-4d77-a63a-087807303ca7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.218839 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9t69\" (UniqueName: \"kubernetes.io/projected/e43a41bb-98a7-48f7-8a29-1dc807c5ad5e-kube-api-access-q9t69\") pod \"observability-operator-59bdc8b94-4tbm7\" (UID: \"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e\") " pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.282537 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s9r\" (UniqueName: \"kubernetes.io/projected/be3ad292-b6cf-42bc-8eee-2768c60702be-kube-api-access-v2s9r\") pod \"perses-operator-5bf474d74f-4j4ln\" (UID: \"be3ad292-b6cf-42bc-8eee-2768c60702be\") " pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.282628 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/be3ad292-b6cf-42bc-8eee-2768c60702be-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4j4ln\" (UID: \"be3ad292-b6cf-42bc-8eee-2768c60702be\") " pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.283428 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/be3ad292-b6cf-42bc-8eee-2768c60702be-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4j4ln\" (UID: \"be3ad292-b6cf-42bc-8eee-2768c60702be\") " pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.298997 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v2s9r\" (UniqueName: \"kubernetes.io/projected/be3ad292-b6cf-42bc-8eee-2768c60702be-kube-api-access-v2s9r\") pod \"perses-operator-5bf474d74f-4j4ln\" (UID: \"be3ad292-b6cf-42bc-8eee-2768c60702be\") " pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.344345 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" event={"ID":"1b37b4b3-d13f-47e7-8a75-2cf467ecc917","Type":"ContainerStarted","Data":"1df05c052ad97cc690873ccd3af4137837c52b7c3b98ec780dfd4b0e9fcebee5"} Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.344728 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.344846 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.344882 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.349316 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.387776 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" podStartSLOduration=8.387762417 podStartE2EDuration="8.387762417s" podCreationTimestamp="2026-01-22 16:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:41:00.387620473 +0000 UTC m=+753.032167173" watchObservedRunningTime="2026-01-22 16:41:00.387762417 +0000 UTC m=+753.032309107" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.429663 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:41:00 crc kubenswrapper[4704]: I0122 16:41:00.431384 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.432836 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e789fa082ecf066c3be7bcb2df6c0acc6d265eff303eb8426e9c43e0942129fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.432954 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e789fa082ecf066c3be7bcb2df6c0acc6d265eff303eb8426e9c43e0942129fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.432981 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e789fa082ecf066c3be7bcb2df6c0acc6d265eff303eb8426e9c43e0942129fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:00 crc kubenswrapper[4704]: E0122 16:41:00.433055 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4tbm7_openshift-operators(e43a41bb-98a7-48f7-8a29-1dc807c5ad5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4tbm7_openshift-operators(e43a41bb-98a7-48f7-8a29-1dc807c5ad5e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e789fa082ecf066c3be7bcb2df6c0acc6d265eff303eb8426e9c43e0942129fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" podUID="e43a41bb-98a7-48f7-8a29-1dc807c5ad5e" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.309785 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd"] Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.309902 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.310250 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.333112 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(d48a8d38565891201c57e7fe2038c52953a91e58af0aae462e27ef9ef84315ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.333172 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(d48a8d38565891201c57e7fe2038c52953a91e58af0aae462e27ef9ef84315ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.333190 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(d48a8d38565891201c57e7fe2038c52953a91e58af0aae462e27ef9ef84315ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.333237 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators(8e5411a6-6909-463f-9794-35459abc62ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators(8e5411a6-6909-463f-9794-35459abc62ff)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-nptfd_openshift-operators_8e5411a6-6909-463f-9794-35459abc62ff_0(d48a8d38565891201c57e7fe2038c52953a91e58af0aae462e27ef9ef84315ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" podUID="8e5411a6-6909-463f-9794-35459abc62ff" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.347278 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc"] Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.347366 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.347854 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.365099 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn"] Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.365231 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.365697 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.368276 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4tbm7"] Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.368375 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.368734 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.377959 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(010f2d995916547095e3b77a20e50132e8a6700b01939c9927ea10a26403fadf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.378027 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(010f2d995916547095e3b77a20e50132e8a6700b01939c9927ea10a26403fadf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.378048 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(010f2d995916547095e3b77a20e50132e8a6700b01939c9927ea10a26403fadf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.378090 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators(581b3ed3-6843-4e85-8187-2718699e8964)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators(581b3ed3-6843-4e85-8187-2718699e8964)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_openshift-operators_581b3ed3-6843-4e85-8187-2718699e8964_0(010f2d995916547095e3b77a20e50132e8a6700b01939c9927ea10a26403fadf): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" podUID="581b3ed3-6843-4e85-8187-2718699e8964" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.391208 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4j4ln"] Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.449502 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(8aba79b65fbb5384dea7478f16c011d0b176b0d26633a9f124f1d3bf2dbc6162): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.449565 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(8aba79b65fbb5384dea7478f16c011d0b176b0d26633a9f124f1d3bf2dbc6162): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.449587 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(8aba79b65fbb5384dea7478f16c011d0b176b0d26633a9f124f1d3bf2dbc6162): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.449638 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators(c8aeac14-9541-4d77-a63a-087807303ca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators(c8aeac14-9541-4d77-a63a-087807303ca7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_openshift-operators_c8aeac14-9541-4d77-a63a-087807303ca7_0(8aba79b65fbb5384dea7478f16c011d0b176b0d26633a9f124f1d3bf2dbc6162): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" podUID="c8aeac14-9541-4d77-a63a-087807303ca7" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.453757 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e577b568a079d3d2a17cf58db39350e687dd74ede02ea009405dd5fa53bf8cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.453845 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e577b568a079d3d2a17cf58db39350e687dd74ede02ea009405dd5fa53bf8cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.453866 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e577b568a079d3d2a17cf58db39350e687dd74ede02ea009405dd5fa53bf8cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.453909 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-4tbm7_openshift-operators(e43a41bb-98a7-48f7-8a29-1dc807c5ad5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-4tbm7_openshift-operators(e43a41bb-98a7-48f7-8a29-1dc807c5ad5e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-4tbm7_openshift-operators_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e_0(e577b568a079d3d2a17cf58db39350e687dd74ede02ea009405dd5fa53bf8cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" podUID="e43a41bb-98a7-48f7-8a29-1dc807c5ad5e" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.456707 4704 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.456769 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.483565 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(ad8f8337f92eb946ecd1591b7cb7304e5aa940df5f00d59e8959bbe2d50159ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.483850 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(ad8f8337f92eb946ecd1591b7cb7304e5aa940df5f00d59e8959bbe2d50159ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.483887 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(ad8f8337f92eb946ecd1591b7cb7304e5aa940df5f00d59e8959bbe2d50159ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:01 crc kubenswrapper[4704]: E0122 16:41:01.483937 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-4j4ln_openshift-operators(be3ad292-b6cf-42bc-8eee-2768c60702be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-4j4ln_openshift-operators(be3ad292-b6cf-42bc-8eee-2768c60702be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(ad8f8337f92eb946ecd1591b7cb7304e5aa940df5f00d59e8959bbe2d50159ae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" podUID="be3ad292-b6cf-42bc-8eee-2768c60702be" Jan 22 16:41:01 crc kubenswrapper[4704]: I0122 16:41:01.593553 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-rxdxm" Jan 22 16:41:02 crc kubenswrapper[4704]: I0122 16:41:02.353141 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:02 crc kubenswrapper[4704]: I0122 16:41:02.355056 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:02 crc kubenswrapper[4704]: E0122 16:41:02.378603 4704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(94d6eff3412ed822155798da1ec24c7ae386e7573b65069f22e1ef0b5915080a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 22 16:41:02 crc kubenswrapper[4704]: E0122 16:41:02.378671 4704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(94d6eff3412ed822155798da1ec24c7ae386e7573b65069f22e1ef0b5915080a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:02 crc kubenswrapper[4704]: E0122 16:41:02.378693 4704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(94d6eff3412ed822155798da1ec24c7ae386e7573b65069f22e1ef0b5915080a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:02 crc kubenswrapper[4704]: E0122 16:41:02.378735 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-4j4ln_openshift-operators(be3ad292-b6cf-42bc-8eee-2768c60702be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-4j4ln_openshift-operators(be3ad292-b6cf-42bc-8eee-2768c60702be)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-4j4ln_openshift-operators_be3ad292-b6cf-42bc-8eee-2768c60702be_0(94d6eff3412ed822155798da1ec24c7ae386e7573b65069f22e1ef0b5915080a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" podUID="be3ad292-b6cf-42bc-8eee-2768c60702be" Jan 22 16:41:06 crc kubenswrapper[4704]: I0122 16:41:06.697497 4704 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 16:41:11 crc kubenswrapper[4704]: I0122 16:41:11.633072 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:11 crc kubenswrapper[4704]: I0122 16:41:11.633701 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" Jan 22 16:41:11 crc kubenswrapper[4704]: I0122 16:41:11.847005 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd"] Jan 22 16:41:11 crc kubenswrapper[4704]: W0122 16:41:11.855978 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e5411a6_6909_463f_9794_35459abc62ff.slice/crio-9326de7f48338dbb134be90e444da10d11fd38bdfe0f66b1d624519932aa9a3e WatchSource:0}: Error finding container 9326de7f48338dbb134be90e444da10d11fd38bdfe0f66b1d624519932aa9a3e: Status 404 returned error can't find the container with id 9326de7f48338dbb134be90e444da10d11fd38bdfe0f66b1d624519932aa9a3e Jan 22 16:41:12 crc kubenswrapper[4704]: I0122 16:41:12.403668 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" event={"ID":"8e5411a6-6909-463f-9794-35459abc62ff","Type":"ContainerStarted","Data":"9326de7f48338dbb134be90e444da10d11fd38bdfe0f66b1d624519932aa9a3e"} Jan 22 16:41:14 crc kubenswrapper[4704]: I0122 16:41:14.633441 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:14 crc kubenswrapper[4704]: I0122 16:41:14.634025 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:15 crc kubenswrapper[4704]: I0122 16:41:15.639235 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:15 crc kubenswrapper[4704]: I0122 16:41:15.639929 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:15 crc kubenswrapper[4704]: I0122 16:41:15.640187 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" Jan 22 16:41:15 crc kubenswrapper[4704]: I0122 16:41:15.640482 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" Jan 22 16:41:16 crc kubenswrapper[4704]: I0122 16:41:16.635092 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:16 crc kubenswrapper[4704]: I0122 16:41:16.635556 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:17 crc kubenswrapper[4704]: I0122 16:41:17.282102 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn"] Jan 22 16:41:17 crc kubenswrapper[4704]: W0122 16:41:17.285846 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8aeac14_9541_4d77_a63a_087807303ca7.slice/crio-88d2dd076ee1ef409c13188decd23308f9804d9af0ef0774680007be3b03f7ad WatchSource:0}: Error finding container 88d2dd076ee1ef409c13188decd23308f9804d9af0ef0774680007be3b03f7ad: Status 404 returned error can't find the container with id 88d2dd076ee1ef409c13188decd23308f9804d9af0ef0774680007be3b03f7ad Jan 22 16:41:17 crc kubenswrapper[4704]: I0122 16:41:17.286333 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4tbm7"] Jan 22 16:41:17 crc kubenswrapper[4704]: W0122 16:41:17.287295 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43a41bb_98a7_48f7_8a29_1dc807c5ad5e.slice/crio-adc1e1583b0cd4342ca745135db956e4631ee97948c6196c77f44d3c2de0bfe6 WatchSource:0}: Error finding container adc1e1583b0cd4342ca745135db956e4631ee97948c6196c77f44d3c2de0bfe6: Status 404 returned error can't find the container with id adc1e1583b0cd4342ca745135db956e4631ee97948c6196c77f44d3c2de0bfe6 Jan 22 16:41:17 crc kubenswrapper[4704]: I0122 16:41:17.361730 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc"] Jan 22 16:41:17 crc kubenswrapper[4704]: W0122 16:41:17.363026 4704 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod581b3ed3_6843_4e85_8187_2718699e8964.slice/crio-4d86836a9af3e90187236f526f073594df3c4c68d6f38a55fcee968af56123c0 WatchSource:0}: Error finding container 4d86836a9af3e90187236f526f073594df3c4c68d6f38a55fcee968af56123c0: Status 404 returned error can't find the container with id 4d86836a9af3e90187236f526f073594df3c4c68d6f38a55fcee968af56123c0 Jan 22 16:41:17 crc kubenswrapper[4704]: I0122 16:41:17.429509 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4j4ln"] Jan 22 16:41:17 crc kubenswrapper[4704]: I0122 16:41:17.437904 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" event={"ID":"581b3ed3-6843-4e85-8187-2718699e8964","Type":"ContainerStarted","Data":"4d86836a9af3e90187236f526f073594df3c4c68d6f38a55fcee968af56123c0"} Jan 22 16:41:17 crc kubenswrapper[4704]: I0122 16:41:17.439051 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" event={"ID":"c8aeac14-9541-4d77-a63a-087807303ca7","Type":"ContainerStarted","Data":"88d2dd076ee1ef409c13188decd23308f9804d9af0ef0774680007be3b03f7ad"} Jan 22 16:41:17 crc kubenswrapper[4704]: I0122 16:41:17.440326 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" event={"ID":"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e","Type":"ContainerStarted","Data":"adc1e1583b0cd4342ca745135db956e4631ee97948c6196c77f44d3c2de0bfe6"} Jan 22 16:41:17 crc kubenswrapper[4704]: W0122 16:41:17.440519 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe3ad292_b6cf_42bc_8eee_2768c60702be.slice/crio-936ac7b30f940482dbaa9d560596a37b8b9a2505f7c66ce087a77791f1324c6f WatchSource:0}: Error finding container 
936ac7b30f940482dbaa9d560596a37b8b9a2505f7c66ce087a77791f1324c6f: Status 404 returned error can't find the container with id 936ac7b30f940482dbaa9d560596a37b8b9a2505f7c66ce087a77791f1324c6f Jan 22 16:41:18 crc kubenswrapper[4704]: I0122 16:41:18.446184 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" event={"ID":"8e5411a6-6909-463f-9794-35459abc62ff","Type":"ContainerStarted","Data":"9ce5671c027d2e771937407c86fcc61b6030a65add49fad0c7c17cfcd1c1c4cc"} Jan 22 16:41:18 crc kubenswrapper[4704]: I0122 16:41:18.451006 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" event={"ID":"be3ad292-b6cf-42bc-8eee-2768c60702be","Type":"ContainerStarted","Data":"936ac7b30f940482dbaa9d560596a37b8b9a2505f7c66ce087a77791f1324c6f"} Jan 22 16:41:18 crc kubenswrapper[4704]: I0122 16:41:18.467134 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nptfd" podStartSLOduration=14.136744172 podStartE2EDuration="19.467117111s" podCreationTimestamp="2026-01-22 16:40:59 +0000 UTC" firstStartedPulling="2026-01-22 16:41:11.858581112 +0000 UTC m=+764.503127802" lastFinishedPulling="2026-01-22 16:41:17.188954041 +0000 UTC m=+769.833500741" observedRunningTime="2026-01-22 16:41:18.466920045 +0000 UTC m=+771.111466745" watchObservedRunningTime="2026-01-22 16:41:18.467117111 +0000 UTC m=+771.111663811" Jan 22 16:41:20 crc kubenswrapper[4704]: I0122 16:41:20.476088 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" event={"ID":"581b3ed3-6843-4e85-8187-2718699e8964","Type":"ContainerStarted","Data":"8d683f48b1c676caa0166d403a04b955df671b2d2923e4576666e64fff64cfa1"} Jan 22 16:41:20 crc kubenswrapper[4704]: I0122 16:41:20.479294 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" event={"ID":"c8aeac14-9541-4d77-a63a-087807303ca7","Type":"ContainerStarted","Data":"4d3ece3070a955f0b354cc8f9599c3b1a17c125a5129f44ee1f9524ee6e3de81"} Jan 22 16:41:20 crc kubenswrapper[4704]: I0122 16:41:20.487580 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" event={"ID":"be3ad292-b6cf-42bc-8eee-2768c60702be","Type":"ContainerStarted","Data":"53b267f5cb1f8bf4b907e53785166938aa2b2a5cd2b7b7a95ecb65921e608459"} Jan 22 16:41:20 crc kubenswrapper[4704]: I0122 16:41:20.487830 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:20 crc kubenswrapper[4704]: I0122 16:41:20.507232 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc" podStartSLOduration=18.732330903 podStartE2EDuration="21.507215488s" podCreationTimestamp="2026-01-22 16:40:59 +0000 UTC" firstStartedPulling="2026-01-22 16:41:17.365474914 +0000 UTC m=+770.010021614" lastFinishedPulling="2026-01-22 16:41:20.140359499 +0000 UTC m=+772.784906199" observedRunningTime="2026-01-22 16:41:20.504867751 +0000 UTC m=+773.149414451" watchObservedRunningTime="2026-01-22 16:41:20.507215488 +0000 UTC m=+773.151762188" Jan 22 16:41:20 crc kubenswrapper[4704]: I0122 16:41:20.534529 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" podStartSLOduration=17.836725528 podStartE2EDuration="20.534497753s" podCreationTimestamp="2026-01-22 16:41:00 +0000 UTC" firstStartedPulling="2026-01-22 16:41:17.442699767 +0000 UTC m=+770.087246467" lastFinishedPulling="2026-01-22 16:41:20.140471992 +0000 UTC m=+772.785018692" observedRunningTime="2026-01-22 16:41:20.530821248 +0000 UTC m=+773.175367948" 
watchObservedRunningTime="2026-01-22 16:41:20.534497753 +0000 UTC m=+773.179044453" Jan 22 16:41:20 crc kubenswrapper[4704]: I0122 16:41:20.549954 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn" podStartSLOduration=18.693289184 podStartE2EDuration="21.549935971s" podCreationTimestamp="2026-01-22 16:40:59 +0000 UTC" firstStartedPulling="2026-01-22 16:41:17.288999312 +0000 UTC m=+769.933546012" lastFinishedPulling="2026-01-22 16:41:20.145646099 +0000 UTC m=+772.790192799" observedRunningTime="2026-01-22 16:41:20.548870471 +0000 UTC m=+773.193417171" watchObservedRunningTime="2026-01-22 16:41:20.549935971 +0000 UTC m=+773.194482671" Jan 22 16:41:23 crc kubenswrapper[4704]: I0122 16:41:23.273048 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f2ptp" Jan 22 16:41:25 crc kubenswrapper[4704]: I0122 16:41:25.529063 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" event={"ID":"e43a41bb-98a7-48f7-8a29-1dc807c5ad5e","Type":"ContainerStarted","Data":"455efb0b96c66fe4725f8d515865774e8ca5794d6419e20a6bab93274b4777dc"} Jan 22 16:41:25 crc kubenswrapper[4704]: I0122 16:41:25.530811 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:25 crc kubenswrapper[4704]: I0122 16:41:25.554537 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" podStartSLOduration=18.273499103 podStartE2EDuration="25.5545211s" podCreationTimestamp="2026-01-22 16:41:00 +0000 UTC" firstStartedPulling="2026-01-22 16:41:17.289996701 +0000 UTC m=+769.934543401" lastFinishedPulling="2026-01-22 16:41:24.571018698 +0000 UTC m=+777.215565398" observedRunningTime="2026-01-22 16:41:25.55276849 
+0000 UTC m=+778.197315210" watchObservedRunningTime="2026-01-22 16:41:25.5545211 +0000 UTC m=+778.199067800" Jan 22 16:41:25 crc kubenswrapper[4704]: I0122 16:41:25.567765 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4tbm7" Jan 22 16:41:28 crc kubenswrapper[4704]: I0122 16:41:28.130772 4704 scope.go:117] "RemoveContainer" containerID="6c4a050b09adf6789fda5280fa00427c53beafe632ddbeb871ea1f7942418a35" Jan 22 16:41:28 crc kubenswrapper[4704]: I0122 16:41:28.549329 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-77bsn_9357b7a7-d902-4f7e-97b9-b0a7871ec95e/kube-multus/2.log" Jan 22 16:41:31 crc kubenswrapper[4704]: I0122 16:41:31.459523 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-4j4ln" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.223283 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz"] Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.227253 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.230634 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.242244 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz"] Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.388618 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.388704 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-kube-api-access-5t8k5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.388763 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: 
I0122 16:41:36.489759 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.489854 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.489887 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-kube-api-access-5t8k5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.490559 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.490783 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.512803 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-kube-api-access-5t8k5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.548099 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:36 crc kubenswrapper[4704]: I0122 16:41:36.744620 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz"] Jan 22 16:41:36 crc kubenswrapper[4704]: W0122 16:41:36.748582 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac8d18a_c3db_4598_aa53_dd69c190e6a3.slice/crio-e40368baa957ae3e2de93767bf1ba45a11caaaf266da96e744148aceb86d4b66 WatchSource:0}: Error finding container e40368baa957ae3e2de93767bf1ba45a11caaaf266da96e744148aceb86d4b66: Status 404 returned error can't find the container with id e40368baa957ae3e2de93767bf1ba45a11caaaf266da96e744148aceb86d4b66 Jan 22 16:41:37 crc kubenswrapper[4704]: I0122 16:41:37.602501 4704 generic.go:334] "Generic (PLEG): container finished" podID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerID="de7043b0fbc0c977aafa1669880a1dbe778d0979e9f184fbabd3aed9004e2de7" 
exitCode=0 Jan 22 16:41:37 crc kubenswrapper[4704]: I0122 16:41:37.602580 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" event={"ID":"4ac8d18a-c3db-4598-aa53-dd69c190e6a3","Type":"ContainerDied","Data":"de7043b0fbc0c977aafa1669880a1dbe778d0979e9f184fbabd3aed9004e2de7"} Jan 22 16:41:37 crc kubenswrapper[4704]: I0122 16:41:37.602834 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" event={"ID":"4ac8d18a-c3db-4598-aa53-dd69c190e6a3","Type":"ContainerStarted","Data":"e40368baa957ae3e2de93767bf1ba45a11caaaf266da96e744148aceb86d4b66"} Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.574856 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-77f4m"] Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.577151 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.584497 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77f4m"] Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.617334 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535a024a-4218-4bb1-86e5-f8b63f1b10c4-utilities\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.617423 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khhpq\" (UniqueName: \"kubernetes.io/projected/535a024a-4218-4bb1-86e5-f8b63f1b10c4-kube-api-access-khhpq\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.617450 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535a024a-4218-4bb1-86e5-f8b63f1b10c4-catalog-content\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.718262 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535a024a-4218-4bb1-86e5-f8b63f1b10c4-utilities\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.718323 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-khhpq\" (UniqueName: \"kubernetes.io/projected/535a024a-4218-4bb1-86e5-f8b63f1b10c4-kube-api-access-khhpq\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.718341 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535a024a-4218-4bb1-86e5-f8b63f1b10c4-catalog-content\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.718777 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535a024a-4218-4bb1-86e5-f8b63f1b10c4-catalog-content\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.718910 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535a024a-4218-4bb1-86e5-f8b63f1b10c4-utilities\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.744114 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khhpq\" (UniqueName: \"kubernetes.io/projected/535a024a-4218-4bb1-86e5-f8b63f1b10c4-kube-api-access-khhpq\") pod \"redhat-operators-77f4m\" (UID: \"535a024a-4218-4bb1-86e5-f8b63f1b10c4\") " pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:38 crc kubenswrapper[4704]: I0122 16:41:38.895172 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:39 crc kubenswrapper[4704]: I0122 16:41:39.108950 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77f4m"] Jan 22 16:41:39 crc kubenswrapper[4704]: I0122 16:41:39.616029 4704 generic.go:334] "Generic (PLEG): container finished" podID="535a024a-4218-4bb1-86e5-f8b63f1b10c4" containerID="6a5964719f6e6d4049be64530c549709185182f0093bfe78aa0fb76c21a43013" exitCode=0 Jan 22 16:41:39 crc kubenswrapper[4704]: I0122 16:41:39.616138 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77f4m" event={"ID":"535a024a-4218-4bb1-86e5-f8b63f1b10c4","Type":"ContainerDied","Data":"6a5964719f6e6d4049be64530c549709185182f0093bfe78aa0fb76c21a43013"} Jan 22 16:41:39 crc kubenswrapper[4704]: I0122 16:41:39.616340 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77f4m" event={"ID":"535a024a-4218-4bb1-86e5-f8b63f1b10c4","Type":"ContainerStarted","Data":"462f632e49cafaac419bd65f8086fede5fe9cd51925e079f764dbe8fa458aa1a"} Jan 22 16:41:39 crc kubenswrapper[4704]: I0122 16:41:39.618623 4704 generic.go:334] "Generic (PLEG): container finished" podID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerID="c89ed303b8f8593e1260b5402ad3abcb42c7af3318960f0b145f6cd0f55eb526" exitCode=0 Jan 22 16:41:39 crc kubenswrapper[4704]: I0122 16:41:39.618725 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" event={"ID":"4ac8d18a-c3db-4598-aa53-dd69c190e6a3","Type":"ContainerDied","Data":"c89ed303b8f8593e1260b5402ad3abcb42c7af3318960f0b145f6cd0f55eb526"} Jan 22 16:41:40 crc kubenswrapper[4704]: I0122 16:41:40.627038 4704 generic.go:334] "Generic (PLEG): container finished" podID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" 
containerID="be64bb01bacc8cade3a25469f55c133a2389a4430f4417c8f6838079268d6412" exitCode=0 Jan 22 16:41:40 crc kubenswrapper[4704]: I0122 16:41:40.627103 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" event={"ID":"4ac8d18a-c3db-4598-aa53-dd69c190e6a3","Type":"ContainerDied","Data":"be64bb01bacc8cade3a25469f55c133a2389a4430f4417c8f6838079268d6412"} Jan 22 16:41:41 crc kubenswrapper[4704]: I0122 16:41:41.860154 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:41 crc kubenswrapper[4704]: I0122 16:41:41.961372 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-util\") pod \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " Jan 22 16:41:41 crc kubenswrapper[4704]: I0122 16:41:41.961446 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-kube-api-access-5t8k5\") pod \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " Jan 22 16:41:41 crc kubenswrapper[4704]: I0122 16:41:41.961478 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-bundle\") pod \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\" (UID: \"4ac8d18a-c3db-4598-aa53-dd69c190e6a3\") " Jan 22 16:41:41 crc kubenswrapper[4704]: I0122 16:41:41.962044 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-bundle" (OuterVolumeSpecName: "bundle") pod 
"4ac8d18a-c3db-4598-aa53-dd69c190e6a3" (UID: "4ac8d18a-c3db-4598-aa53-dd69c190e6a3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:41:41 crc kubenswrapper[4704]: I0122 16:41:41.967332 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-kube-api-access-5t8k5" (OuterVolumeSpecName: "kube-api-access-5t8k5") pod "4ac8d18a-c3db-4598-aa53-dd69c190e6a3" (UID: "4ac8d18a-c3db-4598-aa53-dd69c190e6a3"). InnerVolumeSpecName "kube-api-access-5t8k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:41:41 crc kubenswrapper[4704]: I0122 16:41:41.975108 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-util" (OuterVolumeSpecName: "util") pod "4ac8d18a-c3db-4598-aa53-dd69c190e6a3" (UID: "4ac8d18a-c3db-4598-aa53-dd69c190e6a3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:41:42 crc kubenswrapper[4704]: I0122 16:41:42.063244 4704 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:42 crc kubenswrapper[4704]: I0122 16:41:42.063328 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t8k5\" (UniqueName: \"kubernetes.io/projected/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-kube-api-access-5t8k5\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:42 crc kubenswrapper[4704]: I0122 16:41:42.063349 4704 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ac8d18a-c3db-4598-aa53-dd69c190e6a3-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:41:42 crc kubenswrapper[4704]: I0122 16:41:42.643613 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" event={"ID":"4ac8d18a-c3db-4598-aa53-dd69c190e6a3","Type":"ContainerDied","Data":"e40368baa957ae3e2de93767bf1ba45a11caaaf266da96e744148aceb86d4b66"} Jan 22 16:41:42 crc kubenswrapper[4704]: I0122 16:41:42.643649 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e40368baa957ae3e2de93767bf1ba45a11caaaf266da96e744148aceb86d4b66" Jan 22 16:41:42 crc kubenswrapper[4704]: I0122 16:41:42.643712 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.607411 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z9sc2"] Jan 22 16:41:44 crc kubenswrapper[4704]: E0122 16:41:44.607641 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerName="util" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.607653 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerName="util" Jan 22 16:41:44 crc kubenswrapper[4704]: E0122 16:41:44.607676 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerName="pull" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.607683 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerName="pull" Jan 22 16:41:44 crc kubenswrapper[4704]: E0122 16:41:44.607692 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerName="extract" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.607698 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" 
containerName="extract" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.607837 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac8d18a-c3db-4598-aa53-dd69c190e6a3" containerName="extract" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.608242 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.611326 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.611538 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.611701 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tsrm7" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.626227 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z9sc2"] Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.795091 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9lx\" (UniqueName: \"kubernetes.io/projected/8a05cf90-d057-49da-a06d-40a9343b611b-kube-api-access-sr9lx\") pod \"nmstate-operator-646758c888-z9sc2\" (UID: \"8a05cf90-d057-49da-a06d-40a9343b611b\") " pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.897011 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr9lx\" (UniqueName: \"kubernetes.io/projected/8a05cf90-d057-49da-a06d-40a9343b611b-kube-api-access-sr9lx\") pod \"nmstate-operator-646758c888-z9sc2\" (UID: \"8a05cf90-d057-49da-a06d-40a9343b611b\") " pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" Jan 22 
16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.918103 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9lx\" (UniqueName: \"kubernetes.io/projected/8a05cf90-d057-49da-a06d-40a9343b611b-kube-api-access-sr9lx\") pod \"nmstate-operator-646758c888-z9sc2\" (UID: \"8a05cf90-d057-49da-a06d-40a9343b611b\") " pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" Jan 22 16:41:44 crc kubenswrapper[4704]: I0122 16:41:44.935738 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" Jan 22 16:41:48 crc kubenswrapper[4704]: I0122 16:41:48.092772 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z9sc2"] Jan 22 16:41:48 crc kubenswrapper[4704]: I0122 16:41:48.677640 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" event={"ID":"8a05cf90-d057-49da-a06d-40a9343b611b","Type":"ContainerStarted","Data":"c1a1e3c39f3d396ab593db5d382a56a1924d04b85b7fc8fa9761861b29f24158"} Jan 22 16:41:48 crc kubenswrapper[4704]: I0122 16:41:48.680108 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77f4m" event={"ID":"535a024a-4218-4bb1-86e5-f8b63f1b10c4","Type":"ContainerStarted","Data":"1c2c9ed8b06bcb95ad50c3eec4441ec1966a6f0d4d5d7a50b91af829094d732e"} Jan 22 16:41:49 crc kubenswrapper[4704]: I0122 16:41:49.687357 4704 generic.go:334] "Generic (PLEG): container finished" podID="535a024a-4218-4bb1-86e5-f8b63f1b10c4" containerID="1c2c9ed8b06bcb95ad50c3eec4441ec1966a6f0d4d5d7a50b91af829094d732e" exitCode=0 Jan 22 16:41:49 crc kubenswrapper[4704]: I0122 16:41:49.687408 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77f4m" 
event={"ID":"535a024a-4218-4bb1-86e5-f8b63f1b10c4","Type":"ContainerDied","Data":"1c2c9ed8b06bcb95ad50c3eec4441ec1966a6f0d4d5d7a50b91af829094d732e"} Jan 22 16:41:50 crc kubenswrapper[4704]: I0122 16:41:50.694107 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77f4m" event={"ID":"535a024a-4218-4bb1-86e5-f8b63f1b10c4","Type":"ContainerStarted","Data":"bc2b332755e210e4b7812a7b5b8ce9732c59e5d33a6ef46efca139af0447f1e7"} Jan 22 16:41:50 crc kubenswrapper[4704]: I0122 16:41:50.711026 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-77f4m" podStartSLOduration=2.26027027 podStartE2EDuration="12.711007411s" podCreationTimestamp="2026-01-22 16:41:38 +0000 UTC" firstStartedPulling="2026-01-22 16:41:39.617772455 +0000 UTC m=+792.262319155" lastFinishedPulling="2026-01-22 16:41:50.068509596 +0000 UTC m=+802.713056296" observedRunningTime="2026-01-22 16:41:50.707720214 +0000 UTC m=+803.352266944" watchObservedRunningTime="2026-01-22 16:41:50.711007411 +0000 UTC m=+803.355554111" Jan 22 16:41:51 crc kubenswrapper[4704]: I0122 16:41:51.970717 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9gbm6"] Jan 22 16:41:51 crc kubenswrapper[4704]: I0122 16:41:51.972088 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:51 crc kubenswrapper[4704]: I0122 16:41:51.981423 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gbm6"] Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.087276 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-utilities\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.087393 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvf9\" (UniqueName: \"kubernetes.io/projected/c5ae5827-d602-4f89-9e1f-96068605ebee-kube-api-access-lzvf9\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.087508 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-catalog-content\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.188332 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvf9\" (UniqueName: \"kubernetes.io/projected/c5ae5827-d602-4f89-9e1f-96068605ebee-kube-api-access-lzvf9\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.188410 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-catalog-content\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.188452 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-utilities\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.188993 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-utilities\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.189015 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-catalog-content\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.211690 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzvf9\" (UniqueName: \"kubernetes.io/projected/c5ae5827-d602-4f89-9e1f-96068605ebee-kube-api-access-lzvf9\") pod \"redhat-marketplace-9gbm6\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.288288 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.488221 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gbm6"] Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.708100 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gbm6" event={"ID":"c5ae5827-d602-4f89-9e1f-96068605ebee","Type":"ContainerStarted","Data":"f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075"} Jan 22 16:41:52 crc kubenswrapper[4704]: I0122 16:41:52.708376 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gbm6" event={"ID":"c5ae5827-d602-4f89-9e1f-96068605ebee","Type":"ContainerStarted","Data":"7aaede76ba86ada7be8e552a29980d74cac3a3081590ff4aff1856487bdaa340"} Jan 22 16:41:53 crc kubenswrapper[4704]: I0122 16:41:53.716765 4704 generic.go:334] "Generic (PLEG): container finished" podID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerID="f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075" exitCode=0 Jan 22 16:41:53 crc kubenswrapper[4704]: I0122 16:41:53.716835 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gbm6" event={"ID":"c5ae5827-d602-4f89-9e1f-96068605ebee","Type":"ContainerDied","Data":"f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075"} Jan 22 16:41:58 crc kubenswrapper[4704]: I0122 16:41:58.895565 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:58 crc kubenswrapper[4704]: I0122 16:41:58.895941 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:58 crc kubenswrapper[4704]: I0122 16:41:58.992866 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:59 crc kubenswrapper[4704]: I0122 16:41:59.797724 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-77f4m" Jan 22 16:41:59 crc kubenswrapper[4704]: I0122 16:41:59.933512 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77f4m"] Jan 22 16:41:59 crc kubenswrapper[4704]: I0122 16:41:59.979177 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnbwc"] Jan 22 16:41:59 crc kubenswrapper[4704]: I0122 16:41:59.979389 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dnbwc" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="registry-server" containerID="cri-o://64af9810cbd9e0475238f64f4ddc09adc77e4eb376204a36f7c5997b106cb79c" gracePeriod=2 Jan 22 16:42:01 crc kubenswrapper[4704]: I0122 16:42:01.763262 4704 generic.go:334] "Generic (PLEG): container finished" podID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerID="64af9810cbd9e0475238f64f4ddc09adc77e4eb376204a36f7c5997b106cb79c" exitCode=0 Jan 22 16:42:01 crc kubenswrapper[4704]: I0122 16:42:01.764153 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnbwc" event={"ID":"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da","Type":"ContainerDied","Data":"64af9810cbd9e0475238f64f4ddc09adc77e4eb376204a36f7c5997b106cb79c"} Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.518200 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.656000 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-catalog-content\") pod \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.656068 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z5ln\" (UniqueName: \"kubernetes.io/projected/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-kube-api-access-7z5ln\") pod \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.656220 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-utilities\") pod \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\" (UID: \"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da\") " Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.657256 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-utilities" (OuterVolumeSpecName: "utilities") pod "4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" (UID: "4a29fc77-1872-44d7-b2a2-9c0f3a13f1da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.662983 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-kube-api-access-7z5ln" (OuterVolumeSpecName: "kube-api-access-7z5ln") pod "4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" (UID: "4a29fc77-1872-44d7-b2a2-9c0f3a13f1da"). InnerVolumeSpecName "kube-api-access-7z5ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.757238 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z5ln\" (UniqueName: \"kubernetes.io/projected/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-kube-api-access-7z5ln\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.757279 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.774323 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" (UID: "4a29fc77-1872-44d7-b2a2-9c0f3a13f1da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.777760 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnbwc" event={"ID":"4a29fc77-1872-44d7-b2a2-9c0f3a13f1da","Type":"ContainerDied","Data":"edcc92696e7ee121f3c2056fdb5fc081d792e5036130bfe912089d6d513ed2e4"} Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.777822 4704 scope.go:117] "RemoveContainer" containerID="64af9810cbd9e0475238f64f4ddc09adc77e4eb376204a36f7c5997b106cb79c" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.777959 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnbwc" Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.807983 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnbwc"] Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.812455 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dnbwc"] Jan 22 16:42:02 crc kubenswrapper[4704]: I0122 16:42:02.859196 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:03 crc kubenswrapper[4704]: I0122 16:42:03.640592 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" path="/var/lib/kubelet/pods/4a29fc77-1872-44d7-b2a2-9c0f3a13f1da/volumes" Jan 22 16:42:04 crc kubenswrapper[4704]: I0122 16:42:04.256444 4704 scope.go:117] "RemoveContainer" containerID="c16a69e4bddcc9e7bb9fdeb6fad9692fca118c997367232c4e3ad680c4010c2b" Jan 22 16:42:05 crc kubenswrapper[4704]: I0122 16:42:05.138004 4704 scope.go:117] "RemoveContainer" containerID="28215890624c53097fa338097109e6ab52a3e91d3b34edc725ea5dd28eff3762" Jan 22 16:42:06 crc kubenswrapper[4704]: I0122 16:42:06.808376 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" event={"ID":"8a05cf90-d057-49da-a06d-40a9343b611b","Type":"ContainerStarted","Data":"ffb4bd0e98c23c897d31f22be26ba133000fcf288262b2bbeadbfc0068bd4ae0"} Jan 22 16:42:06 crc kubenswrapper[4704]: I0122 16:42:06.810956 4704 generic.go:334] "Generic (PLEG): container finished" podID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerID="cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa" exitCode=0 Jan 22 16:42:06 crc kubenswrapper[4704]: I0122 16:42:06.810990 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-9gbm6" event={"ID":"c5ae5827-d602-4f89-9e1f-96068605ebee","Type":"ContainerDied","Data":"cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa"} Jan 22 16:42:06 crc kubenswrapper[4704]: I0122 16:42:06.840651 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-z9sc2" podStartSLOduration=5.049610552 podStartE2EDuration="22.840635592s" podCreationTimestamp="2026-01-22 16:41:44 +0000 UTC" firstStartedPulling="2026-01-22 16:41:48.106833863 +0000 UTC m=+800.751380563" lastFinishedPulling="2026-01-22 16:42:05.897858903 +0000 UTC m=+818.542405603" observedRunningTime="2026-01-22 16:42:06.835692807 +0000 UTC m=+819.480239527" watchObservedRunningTime="2026-01-22 16:42:06.840635592 +0000 UTC m=+819.485182292" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.819078 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gbm6" event={"ID":"c5ae5827-d602-4f89-9e1f-96068605ebee","Type":"ContainerStarted","Data":"193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c"} Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.840725 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9gbm6" podStartSLOduration=3.337019708 podStartE2EDuration="16.840703011s" podCreationTimestamp="2026-01-22 16:41:51 +0000 UTC" firstStartedPulling="2026-01-22 16:41:53.718348525 +0000 UTC m=+806.362895245" lastFinishedPulling="2026-01-22 16:42:07.222031828 +0000 UTC m=+819.866578548" observedRunningTime="2026-01-22 16:42:07.838515766 +0000 UTC m=+820.483062486" watchObservedRunningTime="2026-01-22 16:42:07.840703011 +0000 UTC m=+820.485249721" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.874928 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hd6sv"] Jan 22 16:42:07 crc 
kubenswrapper[4704]: E0122 16:42:07.875178 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="extract-content" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.875197 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="extract-content" Jan 22 16:42:07 crc kubenswrapper[4704]: E0122 16:42:07.875207 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="registry-server" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.875216 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="registry-server" Jan 22 16:42:07 crc kubenswrapper[4704]: E0122 16:42:07.875241 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="extract-utilities" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.875250 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="extract-utilities" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.875383 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a29fc77-1872-44d7-b2a2-9c0f3a13f1da" containerName="registry-server" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.876114 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.877734 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tdpld" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.885611 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hd6sv"] Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.902922 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-28zp7"] Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.903566 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.940189 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7"] Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.941417 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.942031 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-dbus-socket\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.942125 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-ovs-socket\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.942167 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-nmstate-lock\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.942199 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8n4\" (UniqueName: \"kubernetes.io/projected/b0c7e587-8794-4b01-ae39-83cb29c3c4c6-kube-api-access-hf8n4\") pod \"nmstate-metrics-54757c584b-hd6sv\" (UID: \"b0c7e587-8794-4b01-ae39-83cb29c3c4c6\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.942418 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tkw\" (UniqueName: \"kubernetes.io/projected/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-kube-api-access-f4tkw\") 
pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.943535 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 16:42:07 crc kubenswrapper[4704]: I0122 16:42:07.968492 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7"] Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.024344 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj"] Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.025212 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.029060 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.029306 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dd8sf" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.029392 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.034024 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj"] Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.044782 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9trqc\" (UniqueName: \"kubernetes.io/projected/acc2a3ba-8a71-460c-979b-704ea09aa117-kube-api-access-9trqc\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.044910 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-ovs-socket\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.044936 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88d23917-02a3-4eba-94a8-50b5e3aa06a4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4rtd7\" (UID: \"88d23917-02a3-4eba-94a8-50b5e3aa06a4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.044968 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-nmstate-lock\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.044986 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf8n4\" (UniqueName: \"kubernetes.io/projected/b0c7e587-8794-4b01-ae39-83cb29c3c4c6-kube-api-access-hf8n4\") pod \"nmstate-metrics-54757c584b-hd6sv\" (UID: \"b0c7e587-8794-4b01-ae39-83cb29c3c4c6\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.045008 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tkw\" (UniqueName: \"kubernetes.io/projected/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-kube-api-access-f4tkw\") pod \"nmstate-handler-28zp7\" (UID: 
\"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.045031 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/acc2a3ba-8a71-460c-979b-704ea09aa117-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.045058 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/acc2a3ba-8a71-460c-979b-704ea09aa117-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.045105 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-dbus-socket\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.045131 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crmjl\" (UniqueName: \"kubernetes.io/projected/88d23917-02a3-4eba-94a8-50b5e3aa06a4-kube-api-access-crmjl\") pod \"nmstate-webhook-8474b5b9d8-4rtd7\" (UID: \"88d23917-02a3-4eba-94a8-50b5e3aa06a4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.045244 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-ovs-socket\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.045284 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-nmstate-lock\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.046082 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-dbus-socket\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.075130 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tkw\" (UniqueName: \"kubernetes.io/projected/07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc-kube-api-access-f4tkw\") pod \"nmstate-handler-28zp7\" (UID: \"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc\") " pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.075820 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf8n4\" (UniqueName: \"kubernetes.io/projected/b0c7e587-8794-4b01-ae39-83cb29c3c4c6-kube-api-access-hf8n4\") pod \"nmstate-metrics-54757c584b-hd6sv\" (UID: \"b0c7e587-8794-4b01-ae39-83cb29c3c4c6\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.146005 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88d23917-02a3-4eba-94a8-50b5e3aa06a4-tls-key-pair\") pod 
\"nmstate-webhook-8474b5b9d8-4rtd7\" (UID: \"88d23917-02a3-4eba-94a8-50b5e3aa06a4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.146086 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/acc2a3ba-8a71-460c-979b-704ea09aa117-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.146116 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/acc2a3ba-8a71-460c-979b-704ea09aa117-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.146163 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crmjl\" (UniqueName: \"kubernetes.io/projected/88d23917-02a3-4eba-94a8-50b5e3aa06a4-kube-api-access-crmjl\") pod \"nmstate-webhook-8474b5b9d8-4rtd7\" (UID: \"88d23917-02a3-4eba-94a8-50b5e3aa06a4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.146190 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9trqc\" (UniqueName: \"kubernetes.io/projected/acc2a3ba-8a71-460c-979b-704ea09aa117-kube-api-access-9trqc\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: E0122 16:42:08.146621 4704 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret 
"plugin-serving-cert" not found Jan 22 16:42:08 crc kubenswrapper[4704]: E0122 16:42:08.146720 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/acc2a3ba-8a71-460c-979b-704ea09aa117-plugin-serving-cert podName:acc2a3ba-8a71-460c-979b-704ea09aa117 nodeName:}" failed. No retries permitted until 2026-01-22 16:42:08.646696763 +0000 UTC m=+821.291243513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/acc2a3ba-8a71-460c-979b-704ea09aa117-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-6qvtj" (UID: "acc2a3ba-8a71-460c-979b-704ea09aa117") : secret "plugin-serving-cert" not found Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.147678 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/acc2a3ba-8a71-460c-979b-704ea09aa117-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.151307 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88d23917-02a3-4eba-94a8-50b5e3aa06a4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4rtd7\" (UID: \"88d23917-02a3-4eba-94a8-50b5e3aa06a4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.172107 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9trqc\" (UniqueName: \"kubernetes.io/projected/acc2a3ba-8a71-460c-979b-704ea09aa117-kube-api-access-9trqc\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.175091 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crmjl\" (UniqueName: \"kubernetes.io/projected/88d23917-02a3-4eba-94a8-50b5e3aa06a4-kube-api-access-crmjl\") pod \"nmstate-webhook-8474b5b9d8-4rtd7\" (UID: \"88d23917-02a3-4eba-94a8-50b5e3aa06a4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.202563 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76479b6979-x64kd"] Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.203377 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.215683 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76479b6979-x64kd"] Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.244525 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.246543 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8tk\" (UniqueName: \"kubernetes.io/projected/31ee5638-ee25-460d-ac71-44e5a9aafc9b-kube-api-access-6x8tk\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.246575 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-oauth-serving-cert\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.246608 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-config\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.246633 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-serving-cert\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.246654 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-trusted-ca-bundle\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.246686 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-service-ca\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.246717 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-oauth-config\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc 
kubenswrapper[4704]: I0122 16:42:08.264148 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.273309 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:08 crc kubenswrapper[4704]: W0122 16:42:08.324643 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f6ae7b_7e7f_493c_bf6a_d3ff4233d9bc.slice/crio-c723cbbe93592f5a5b3e9dd9e172a9c6784f999fdaadff2c0f45ef86f8b50b48 WatchSource:0}: Error finding container c723cbbe93592f5a5b3e9dd9e172a9c6784f999fdaadff2c0f45ef86f8b50b48: Status 404 returned error can't find the container with id c723cbbe93592f5a5b3e9dd9e172a9c6784f999fdaadff2c0f45ef86f8b50b48 Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.348974 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-service-ca\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.349047 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-oauth-config\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.349081 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8tk\" (UniqueName: \"kubernetes.io/projected/31ee5638-ee25-460d-ac71-44e5a9aafc9b-kube-api-access-6x8tk\") pod \"console-76479b6979-x64kd\" (UID: 
\"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.349102 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-oauth-serving-cert\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.349153 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-config\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.349186 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-serving-cert\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.349220 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-trusted-ca-bundle\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.357672 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-serving-cert\") pod \"console-76479b6979-x64kd\" (UID: 
\"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.359342 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-service-ca\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.360740 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-oauth-serving-cert\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.361056 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-trusted-ca-bundle\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.364196 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-config\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.365168 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-oauth-config\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " 
pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.369728 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8tk\" (UniqueName: \"kubernetes.io/projected/31ee5638-ee25-460d-ac71-44e5a9aafc9b-kube-api-access-6x8tk\") pod \"console-76479b6979-x64kd\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.480128 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hd6sv"] Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.517592 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.551020 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7"] Jan 22 16:42:08 crc kubenswrapper[4704]: W0122 16:42:08.558558 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88d23917_02a3_4eba_94a8_50b5e3aa06a4.slice/crio-1c6e5d6bd8bbc7d33ab7cdc8d0b6c30a7c46a59f5ec6a635b472af9caf93c571 WatchSource:0}: Error finding container 1c6e5d6bd8bbc7d33ab7cdc8d0b6c30a7c46a59f5ec6a635b472af9caf93c571: Status 404 returned error can't find the container with id 1c6e5d6bd8bbc7d33ab7cdc8d0b6c30a7c46a59f5ec6a635b472af9caf93c571 Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.652651 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/acc2a3ba-8a71-460c-979b-704ea09aa117-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 
16:42:08.657599 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/acc2a3ba-8a71-460c-979b-704ea09aa117-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6qvtj\" (UID: \"acc2a3ba-8a71-460c-979b-704ea09aa117\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.710105 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76479b6979-x64kd"] Jan 22 16:42:08 crc kubenswrapper[4704]: W0122 16:42:08.717117 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31ee5638_ee25_460d_ac71_44e5a9aafc9b.slice/crio-ed0c12d63fccfe7c55bfff34ae7dfd9c65ee50ead615d408f2a4f08f77d763cd WatchSource:0}: Error finding container ed0c12d63fccfe7c55bfff34ae7dfd9c65ee50ead615d408f2a4f08f77d763cd: Status 404 returned error can't find the container with id ed0c12d63fccfe7c55bfff34ae7dfd9c65ee50ead615d408f2a4f08f77d763cd Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.829746 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" event={"ID":"b0c7e587-8794-4b01-ae39-83cb29c3c4c6","Type":"ContainerStarted","Data":"bb225986ae5536f729cea36c0f8a47caec9b96329bfd94112f03c3894622d43f"} Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.831477 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76479b6979-x64kd" event={"ID":"31ee5638-ee25-460d-ac71-44e5a9aafc9b","Type":"ContainerStarted","Data":"ed0c12d63fccfe7c55bfff34ae7dfd9c65ee50ead615d408f2a4f08f77d763cd"} Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.833155 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" 
event={"ID":"88d23917-02a3-4eba-94a8-50b5e3aa06a4","Type":"ContainerStarted","Data":"1c6e5d6bd8bbc7d33ab7cdc8d0b6c30a7c46a59f5ec6a635b472af9caf93c571"} Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.834782 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-28zp7" event={"ID":"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc","Type":"ContainerStarted","Data":"c723cbbe93592f5a5b3e9dd9e172a9c6784f999fdaadff2c0f45ef86f8b50b48"} Jan 22 16:42:08 crc kubenswrapper[4704]: I0122 16:42:08.957339 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" Jan 22 16:42:09 crc kubenswrapper[4704]: I0122 16:42:09.384283 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj"] Jan 22 16:42:09 crc kubenswrapper[4704]: W0122 16:42:09.392664 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacc2a3ba_8a71_460c_979b_704ea09aa117.slice/crio-4e55e182fae5a0feb02bf09e9a1726a13d10a10e8b04a045190f0c0b6e9ad17d WatchSource:0}: Error finding container 4e55e182fae5a0feb02bf09e9a1726a13d10a10e8b04a045190f0c0b6e9ad17d: Status 404 returned error can't find the container with id 4e55e182fae5a0feb02bf09e9a1726a13d10a10e8b04a045190f0c0b6e9ad17d Jan 22 16:42:09 crc kubenswrapper[4704]: I0122 16:42:09.854207 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" event={"ID":"acc2a3ba-8a71-460c-979b-704ea09aa117","Type":"ContainerStarted","Data":"4e55e182fae5a0feb02bf09e9a1726a13d10a10e8b04a045190f0c0b6e9ad17d"} Jan 22 16:42:09 crc kubenswrapper[4704]: I0122 16:42:09.857553 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76479b6979-x64kd" 
event={"ID":"31ee5638-ee25-460d-ac71-44e5a9aafc9b","Type":"ContainerStarted","Data":"c7efb83ef3f1befbb64b04b9016030b2825ef234d9f4e6f610306ae4c2e72139"} Jan 22 16:42:09 crc kubenswrapper[4704]: I0122 16:42:09.880203 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76479b6979-x64kd" podStartSLOduration=1.8801862470000001 podStartE2EDuration="1.880186247s" podCreationTimestamp="2026-01-22 16:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:42:09.878980441 +0000 UTC m=+822.523527161" watchObservedRunningTime="2026-01-22 16:42:09.880186247 +0000 UTC m=+822.524732947" Jan 22 16:42:11 crc kubenswrapper[4704]: I0122 16:42:11.889587 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" event={"ID":"88d23917-02a3-4eba-94a8-50b5e3aa06a4","Type":"ContainerStarted","Data":"133a3aa1f09549275e76ad1d8adc17b3157e2153efb282dc70ddce86733d2cf3"} Jan 22 16:42:11 crc kubenswrapper[4704]: I0122 16:42:11.889755 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:11 crc kubenswrapper[4704]: I0122 16:42:11.892573 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" event={"ID":"b0c7e587-8794-4b01-ae39-83cb29c3c4c6","Type":"ContainerStarted","Data":"d8fe906a1cf64f6487480b69de3f49169f10e9020b991b953ffa2d2a1e6b8ba9"} Jan 22 16:42:11 crc kubenswrapper[4704]: I0122 16:42:11.917584 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" podStartSLOduration=2.686051003 podStartE2EDuration="4.917549971s" podCreationTimestamp="2026-01-22 16:42:07 +0000 UTC" firstStartedPulling="2026-01-22 16:42:08.561204166 +0000 UTC m=+821.205750866" lastFinishedPulling="2026-01-22 
16:42:10.792703104 +0000 UTC m=+823.437249834" observedRunningTime="2026-01-22 16:42:11.915889962 +0000 UTC m=+824.560436672" watchObservedRunningTime="2026-01-22 16:42:11.917549971 +0000 UTC m=+824.562096711" Jan 22 16:42:12 crc kubenswrapper[4704]: I0122 16:42:12.289001 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:42:12 crc kubenswrapper[4704]: I0122 16:42:12.289474 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:42:12 crc kubenswrapper[4704]: I0122 16:42:12.370494 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:42:12 crc kubenswrapper[4704]: I0122 16:42:12.902448 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-28zp7" event={"ID":"07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc","Type":"ContainerStarted","Data":"20d70852cb02af4a2d8bf47156e0cda2fee528c766a63ba5f90b5d5b53b4864f"} Jan 22 16:42:12 crc kubenswrapper[4704]: I0122 16:42:12.950770 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:42:12 crc kubenswrapper[4704]: I0122 16:42:12.968223 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-28zp7" podStartSLOduration=3.495040307 podStartE2EDuration="5.96819346s" podCreationTimestamp="2026-01-22 16:42:07 +0000 UTC" firstStartedPulling="2026-01-22 16:42:08.328151374 +0000 UTC m=+820.972698064" lastFinishedPulling="2026-01-22 16:42:10.801304487 +0000 UTC m=+823.445851217" observedRunningTime="2026-01-22 16:42:12.920425572 +0000 UTC m=+825.564972302" watchObservedRunningTime="2026-01-22 16:42:12.96819346 +0000 UTC m=+825.612740160" Jan 22 16:42:12 crc kubenswrapper[4704]: I0122 16:42:12.995155 4704 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gbm6"] Jan 22 16:42:13 crc kubenswrapper[4704]: I0122 16:42:13.264375 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:13 crc kubenswrapper[4704]: I0122 16:42:13.910157 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" event={"ID":"acc2a3ba-8a71-460c-979b-704ea09aa117","Type":"ContainerStarted","Data":"14ac4772a0207834c4db958548858117dd9370804d08e3b7b9ee687f7bbdc96b"} Jan 22 16:42:13 crc kubenswrapper[4704]: I0122 16:42:13.918911 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" event={"ID":"b0c7e587-8794-4b01-ae39-83cb29c3c4c6","Type":"ContainerStarted","Data":"6b0c3e56d7054f23da24f56223777eee5c873049ecea3c701c1d58e154c6b8ed"} Jan 22 16:42:13 crc kubenswrapper[4704]: I0122 16:42:13.930326 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6qvtj" podStartSLOduration=2.294902365 podStartE2EDuration="5.930301009s" podCreationTimestamp="2026-01-22 16:42:08 +0000 UTC" firstStartedPulling="2026-01-22 16:42:09.395031231 +0000 UTC m=+822.039577921" lastFinishedPulling="2026-01-22 16:42:13.030429865 +0000 UTC m=+825.674976565" observedRunningTime="2026-01-22 16:42:13.923483498 +0000 UTC m=+826.568030198" watchObservedRunningTime="2026-01-22 16:42:13.930301009 +0000 UTC m=+826.574847719" Jan 22 16:42:13 crc kubenswrapper[4704]: I0122 16:42:13.947146 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hd6sv" podStartSLOduration=1.726822209 podStartE2EDuration="6.947123265s" podCreationTimestamp="2026-01-22 16:42:07 +0000 UTC" firstStartedPulling="2026-01-22 16:42:08.489018157 +0000 UTC m=+821.133564857" lastFinishedPulling="2026-01-22 16:42:13.709319203 
+0000 UTC m=+826.353865913" observedRunningTime="2026-01-22 16:42:13.943139958 +0000 UTC m=+826.587686668" watchObservedRunningTime="2026-01-22 16:42:13.947123265 +0000 UTC m=+826.591669965" Jan 22 16:42:14 crc kubenswrapper[4704]: I0122 16:42:14.927072 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9gbm6" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="registry-server" containerID="cri-o://193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c" gracePeriod=2 Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.344479 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.468470 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-utilities\") pod \"c5ae5827-d602-4f89-9e1f-96068605ebee\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.468566 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-catalog-content\") pod \"c5ae5827-d602-4f89-9e1f-96068605ebee\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.468621 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzvf9\" (UniqueName: \"kubernetes.io/projected/c5ae5827-d602-4f89-9e1f-96068605ebee-kube-api-access-lzvf9\") pod \"c5ae5827-d602-4f89-9e1f-96068605ebee\" (UID: \"c5ae5827-d602-4f89-9e1f-96068605ebee\") " Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.469683 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-utilities" (OuterVolumeSpecName: "utilities") pod "c5ae5827-d602-4f89-9e1f-96068605ebee" (UID: "c5ae5827-d602-4f89-9e1f-96068605ebee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.476007 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ae5827-d602-4f89-9e1f-96068605ebee-kube-api-access-lzvf9" (OuterVolumeSpecName: "kube-api-access-lzvf9") pod "c5ae5827-d602-4f89-9e1f-96068605ebee" (UID: "c5ae5827-d602-4f89-9e1f-96068605ebee"). InnerVolumeSpecName "kube-api-access-lzvf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.492585 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5ae5827-d602-4f89-9e1f-96068605ebee" (UID: "c5ae5827-d602-4f89-9e1f-96068605ebee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.569900 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.569931 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ae5827-d602-4f89-9e1f-96068605ebee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.569943 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzvf9\" (UniqueName: \"kubernetes.io/projected/c5ae5827-d602-4f89-9e1f-96068605ebee-kube-api-access-lzvf9\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.936968 4704 generic.go:334] "Generic (PLEG): container finished" podID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerID="193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c" exitCode=0 Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.937028 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gbm6" event={"ID":"c5ae5827-d602-4f89-9e1f-96068605ebee","Type":"ContainerDied","Data":"193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c"} Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.937055 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9gbm6" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.937084 4704 scope.go:117] "RemoveContainer" containerID="193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.937067 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9gbm6" event={"ID":"c5ae5827-d602-4f89-9e1f-96068605ebee","Type":"ContainerDied","Data":"7aaede76ba86ada7be8e552a29980d74cac3a3081590ff4aff1856487bdaa340"} Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.958104 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gbm6"] Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.963465 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9gbm6"] Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.968133 4704 scope.go:117] "RemoveContainer" containerID="cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa" Jan 22 16:42:15 crc kubenswrapper[4704]: I0122 16:42:15.991504 4704 scope.go:117] "RemoveContainer" containerID="f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075" Jan 22 16:42:16 crc kubenswrapper[4704]: I0122 16:42:16.016276 4704 scope.go:117] "RemoveContainer" containerID="193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c" Jan 22 16:42:16 crc kubenswrapper[4704]: E0122 16:42:16.017008 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c\": container with ID starting with 193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c not found: ID does not exist" containerID="193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c" Jan 22 16:42:16 crc kubenswrapper[4704]: I0122 16:42:16.017047 4704 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c"} err="failed to get container status \"193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c\": rpc error: code = NotFound desc = could not find container \"193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c\": container with ID starting with 193a0d8134801507131e3c8c3887eb9d2ed6e46ea77e69616a061f66a336676c not found: ID does not exist" Jan 22 16:42:16 crc kubenswrapper[4704]: I0122 16:42:16.017076 4704 scope.go:117] "RemoveContainer" containerID="cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa" Jan 22 16:42:16 crc kubenswrapper[4704]: E0122 16:42:16.017615 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa\": container with ID starting with cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa not found: ID does not exist" containerID="cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa" Jan 22 16:42:16 crc kubenswrapper[4704]: I0122 16:42:16.017643 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa"} err="failed to get container status \"cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa\": rpc error: code = NotFound desc = could not find container \"cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa\": container with ID starting with cf158927451ee2859d03666b84c2578e8c383a55d7321da8d92a980877f9a8fa not found: ID does not exist" Jan 22 16:42:16 crc kubenswrapper[4704]: I0122 16:42:16.017665 4704 scope.go:117] "RemoveContainer" containerID="f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075" Jan 22 16:42:16 crc kubenswrapper[4704]: E0122 
16:42:16.018107 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075\": container with ID starting with f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075 not found: ID does not exist" containerID="f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075" Jan 22 16:42:16 crc kubenswrapper[4704]: I0122 16:42:16.018126 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075"} err="failed to get container status \"f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075\": rpc error: code = NotFound desc = could not find container \"f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075\": container with ID starting with f12cf3d9143a916173c38f0a8daaae5597d543116a4c76b78917b1b48f159075 not found: ID does not exist" Jan 22 16:42:17 crc kubenswrapper[4704]: I0122 16:42:17.645179 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" path="/var/lib/kubelet/pods/c5ae5827-d602-4f89-9e1f-96068605ebee/volumes" Jan 22 16:42:18 crc kubenswrapper[4704]: I0122 16:42:18.297073 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-28zp7" Jan 22 16:42:18 crc kubenswrapper[4704]: I0122 16:42:18.518016 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:18 crc kubenswrapper[4704]: I0122 16:42:18.518096 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:18 crc kubenswrapper[4704]: I0122 16:42:18.523874 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76479b6979-x64kd" 
Jan 22 16:42:18 crc kubenswrapper[4704]: I0122 16:42:18.966899 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:42:19 crc kubenswrapper[4704]: I0122 16:42:19.028464 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-khgwd"] Jan 22 16:42:28 crc kubenswrapper[4704]: I0122 16:42:28.283554 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4rtd7" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.809255 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w"] Jan 22 16:42:41 crc kubenswrapper[4704]: E0122 16:42:41.809908 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="extract-utilities" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.809923 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="extract-utilities" Jan 22 16:42:41 crc kubenswrapper[4704]: E0122 16:42:41.809942 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="registry-server" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.809950 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="registry-server" Jan 22 16:42:41 crc kubenswrapper[4704]: E0122 16:42:41.809967 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="extract-content" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.809976 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="extract-content" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 
16:42:41.810099 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae5827-d602-4f89-9e1f-96068605ebee" containerName="registry-server" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.810829 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.813125 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.821168 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w"] Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.876080 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spz2g\" (UniqueName: \"kubernetes.io/projected/29d5297c-3dd2-4a53-8945-3f6969c6085c-kube-api-access-spz2g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.876142 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.876201 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.977417 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spz2g\" (UniqueName: \"kubernetes.io/projected/29d5297c-3dd2-4a53-8945-3f6969c6085c-kube-api-access-spz2g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.977507 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.977599 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.978365 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: 
\"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.978539 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:41 crc kubenswrapper[4704]: I0122 16:42:41.999878 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spz2g\" (UniqueName: \"kubernetes.io/projected/29d5297c-3dd2-4a53-8945-3f6969c6085c-kube-api-access-spz2g\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:42 crc kubenswrapper[4704]: I0122 16:42:42.138895 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:42 crc kubenswrapper[4704]: I0122 16:42:42.380009 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w"] Jan 22 16:42:43 crc kubenswrapper[4704]: I0122 16:42:43.153843 4704 generic.go:334] "Generic (PLEG): container finished" podID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerID="89bce39c5137b02b0174e3478a3a83ae003c90debe6366f42c7547056f3facb2" exitCode=0 Jan 22 16:42:43 crc kubenswrapper[4704]: I0122 16:42:43.153931 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" event={"ID":"29d5297c-3dd2-4a53-8945-3f6969c6085c","Type":"ContainerDied","Data":"89bce39c5137b02b0174e3478a3a83ae003c90debe6366f42c7547056f3facb2"} Jan 22 16:42:43 crc kubenswrapper[4704]: I0122 16:42:43.154142 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" event={"ID":"29d5297c-3dd2-4a53-8945-3f6969c6085c","Type":"ContainerStarted","Data":"2bfb1089269e10b9acb88417a622550b6057cab5c79b3fa8250fa260b2559f6e"} Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.076567 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-khgwd" podUID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" containerName="console" containerID="cri-o://a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452" gracePeriod=15 Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.528259 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-khgwd_5ba602c9-6155-46ca-baa1-0cfcd35cab16/console/0.log" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.528574 4704 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.630293 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-oauth-config\") pod \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.630345 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-serving-cert\") pod \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.630371 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-trusted-ca-bundle\") pod \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.630394 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-service-ca\") pod \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.630448 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-config\") pod \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.630590 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-oauth-serving-cert\") pod \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.631260 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-service-ca" (OuterVolumeSpecName: "service-ca") pod "5ba602c9-6155-46ca-baa1-0cfcd35cab16" (UID: "5ba602c9-6155-46ca-baa1-0cfcd35cab16"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.631278 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5ba602c9-6155-46ca-baa1-0cfcd35cab16" (UID: "5ba602c9-6155-46ca-baa1-0cfcd35cab16"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.631354 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58xcv\" (UniqueName: \"kubernetes.io/projected/5ba602c9-6155-46ca-baa1-0cfcd35cab16-kube-api-access-58xcv\") pod \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\" (UID: \"5ba602c9-6155-46ca-baa1-0cfcd35cab16\") " Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.631766 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-config" (OuterVolumeSpecName: "console-config") pod "5ba602c9-6155-46ca-baa1-0cfcd35cab16" (UID: "5ba602c9-6155-46ca-baa1-0cfcd35cab16"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.631829 4704 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.631853 4704 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.632134 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5ba602c9-6155-46ca-baa1-0cfcd35cab16" (UID: "5ba602c9-6155-46ca-baa1-0cfcd35cab16"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.638147 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5ba602c9-6155-46ca-baa1-0cfcd35cab16" (UID: "5ba602c9-6155-46ca-baa1-0cfcd35cab16"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.638495 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba602c9-6155-46ca-baa1-0cfcd35cab16-kube-api-access-58xcv" (OuterVolumeSpecName: "kube-api-access-58xcv") pod "5ba602c9-6155-46ca-baa1-0cfcd35cab16" (UID: "5ba602c9-6155-46ca-baa1-0cfcd35cab16"). InnerVolumeSpecName "kube-api-access-58xcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.638599 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5ba602c9-6155-46ca-baa1-0cfcd35cab16" (UID: "5ba602c9-6155-46ca-baa1-0cfcd35cab16"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.733493 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58xcv\" (UniqueName: \"kubernetes.io/projected/5ba602c9-6155-46ca-baa1-0cfcd35cab16-kube-api-access-58xcv\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.733996 4704 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.734018 4704 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.734037 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:44 crc kubenswrapper[4704]: I0122 16:42:44.734055 4704 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ba602c9-6155-46ca-baa1-0cfcd35cab16-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.170369 4704 generic.go:334] "Generic (PLEG): container 
finished" podID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerID="05b0ba68d01f5a22732bc6daa2a9055c35de09eaa68b83357f386962cfb22a62" exitCode=0 Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.170460 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" event={"ID":"29d5297c-3dd2-4a53-8945-3f6969c6085c","Type":"ContainerDied","Data":"05b0ba68d01f5a22732bc6daa2a9055c35de09eaa68b83357f386962cfb22a62"} Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.175087 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-khgwd_5ba602c9-6155-46ca-baa1-0cfcd35cab16/console/0.log" Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.175129 4704 generic.go:334] "Generic (PLEG): container finished" podID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" containerID="a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452" exitCode=2 Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.175158 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khgwd" event={"ID":"5ba602c9-6155-46ca-baa1-0cfcd35cab16","Type":"ContainerDied","Data":"a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452"} Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.175186 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-khgwd" event={"ID":"5ba602c9-6155-46ca-baa1-0cfcd35cab16","Type":"ContainerDied","Data":"164a1094c520e39c5d6eb2c6b5b2a002a0c818bda5a777af43f41cbc090212a3"} Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.175205 4704 scope.go:117] "RemoveContainer" containerID="a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452" Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.175352 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-khgwd" Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.204405 4704 scope.go:117] "RemoveContainer" containerID="a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452" Jan 22 16:42:45 crc kubenswrapper[4704]: E0122 16:42:45.205036 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452\": container with ID starting with a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452 not found: ID does not exist" containerID="a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452" Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.205124 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452"} err="failed to get container status \"a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452\": rpc error: code = NotFound desc = could not find container \"a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452\": container with ID starting with a2b2c53ec6df588861206cba9912c2d7bf649b151f86f898aa55288c6d517452 not found: ID does not exist" Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.223122 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-khgwd"] Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.227285 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-khgwd"] Jan 22 16:42:45 crc kubenswrapper[4704]: I0122 16:42:45.653828 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" path="/var/lib/kubelet/pods/5ba602c9-6155-46ca-baa1-0cfcd35cab16/volumes" Jan 22 16:42:47 crc kubenswrapper[4704]: I0122 16:42:47.195360 4704 generic.go:334] "Generic (PLEG): 
container finished" podID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerID="d87cfb94fa87732889c4ac7b19f6d13eb3518d0982d510cf69ae4c3c053bf6f2" exitCode=0 Jan 22 16:42:47 crc kubenswrapper[4704]: I0122 16:42:47.195417 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" event={"ID":"29d5297c-3dd2-4a53-8945-3f6969c6085c","Type":"ContainerDied","Data":"d87cfb94fa87732889c4ac7b19f6d13eb3518d0982d510cf69ae4c3c053bf6f2"} Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.419388 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.499707 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-util\") pod \"29d5297c-3dd2-4a53-8945-3f6969c6085c\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.499894 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-bundle\") pod \"29d5297c-3dd2-4a53-8945-3f6969c6085c\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.499942 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spz2g\" (UniqueName: \"kubernetes.io/projected/29d5297c-3dd2-4a53-8945-3f6969c6085c-kube-api-access-spz2g\") pod \"29d5297c-3dd2-4a53-8945-3f6969c6085c\" (UID: \"29d5297c-3dd2-4a53-8945-3f6969c6085c\") " Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.503541 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-bundle" (OuterVolumeSpecName: "bundle") pod "29d5297c-3dd2-4a53-8945-3f6969c6085c" (UID: "29d5297c-3dd2-4a53-8945-3f6969c6085c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.506740 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d5297c-3dd2-4a53-8945-3f6969c6085c-kube-api-access-spz2g" (OuterVolumeSpecName: "kube-api-access-spz2g") pod "29d5297c-3dd2-4a53-8945-3f6969c6085c" (UID: "29d5297c-3dd2-4a53-8945-3f6969c6085c"). InnerVolumeSpecName "kube-api-access-spz2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.512592 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-util" (OuterVolumeSpecName: "util") pod "29d5297c-3dd2-4a53-8945-3f6969c6085c" (UID: "29d5297c-3dd2-4a53-8945-3f6969c6085c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.601651 4704 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.601694 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spz2g\" (UniqueName: \"kubernetes.io/projected/29d5297c-3dd2-4a53-8945-3f6969c6085c-kube-api-access-spz2g\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:48 crc kubenswrapper[4704]: I0122 16:42:48.601707 4704 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29d5297c-3dd2-4a53-8945-3f6969c6085c-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:42:49 crc kubenswrapper[4704]: I0122 16:42:49.091692 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:42:49 crc kubenswrapper[4704]: I0122 16:42:49.092097 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:42:49 crc kubenswrapper[4704]: I0122 16:42:49.217958 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" event={"ID":"29d5297c-3dd2-4a53-8945-3f6969c6085c","Type":"ContainerDied","Data":"2bfb1089269e10b9acb88417a622550b6057cab5c79b3fa8250fa260b2559f6e"} Jan 22 16:42:49 crc kubenswrapper[4704]: I0122 
16:42:49.218035 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bfb1089269e10b9acb88417a622550b6057cab5c79b3fa8250fa260b2559f6e" Jan 22 16:42:49 crc kubenswrapper[4704]: I0122 16:42:49.218064 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.360137 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x"] Jan 22 16:42:57 crc kubenswrapper[4704]: E0122 16:42:57.361026 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerName="util" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.361041 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerName="util" Jan 22 16:42:57 crc kubenswrapper[4704]: E0122 16:42:57.361057 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerName="extract" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.361065 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerName="extract" Jan 22 16:42:57 crc kubenswrapper[4704]: E0122 16:42:57.361076 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" containerName="console" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.361084 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" containerName="console" Jan 22 16:42:57 crc kubenswrapper[4704]: E0122 16:42:57.361103 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerName="pull" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.361111 4704 
state_mem.go:107] "Deleted CPUSet assignment" podUID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerName="pull" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.361243 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba602c9-6155-46ca-baa1-0cfcd35cab16" containerName="console" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.361263 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d5297c-3dd2-4a53-8945-3f6969c6085c" containerName="extract" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.361775 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.363505 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.363544 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.363506 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.363599 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.364060 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6bdpf" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.374376 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x"] Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.420172 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z62mx\" (UniqueName: \"kubernetes.io/projected/ba584155-01b6-46e0-b1df-a5444d77bb39-kube-api-access-z62mx\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.420327 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba584155-01b6-46e0-b1df-a5444d77bb39-apiservice-cert\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.420444 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba584155-01b6-46e0-b1df-a5444d77bb39-webhook-cert\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.521158 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba584155-01b6-46e0-b1df-a5444d77bb39-apiservice-cert\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.521240 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba584155-01b6-46e0-b1df-a5444d77bb39-webhook-cert\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: 
\"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.521322 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z62mx\" (UniqueName: \"kubernetes.io/projected/ba584155-01b6-46e0-b1df-a5444d77bb39-kube-api-access-z62mx\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.530620 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba584155-01b6-46e0-b1df-a5444d77bb39-apiservice-cert\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.532357 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba584155-01b6-46e0-b1df-a5444d77bb39-webhook-cert\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.542599 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z62mx\" (UniqueName: \"kubernetes.io/projected/ba584155-01b6-46e0-b1df-a5444d77bb39-kube-api-access-z62mx\") pod \"metallb-operator-controller-manager-5b4f96dd45-fsw4x\" (UID: \"ba584155-01b6-46e0-b1df-a5444d77bb39\") " pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.672093 4704 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq"] Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.673217 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.674926 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.675332 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.676084 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vr9xs" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.676519 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.697488 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq"] Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.727388 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-apiservice-cert\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.727440 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v626k\" (UniqueName: \"kubernetes.io/projected/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-kube-api-access-v626k\") pod 
\"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.727471 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-webhook-cert\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.828349 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-apiservice-cert\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.828410 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v626k\" (UniqueName: \"kubernetes.io/projected/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-kube-api-access-v626k\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.828433 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-webhook-cert\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.838355 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-apiservice-cert\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.841516 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-webhook-cert\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.847030 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v626k\" (UniqueName: \"kubernetes.io/projected/7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f-kube-api-access-v626k\") pod \"metallb-operator-webhook-server-5868d7bb64-nb9lq\" (UID: \"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f\") " pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:57 crc kubenswrapper[4704]: I0122 16:42:57.990244 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:42:58 crc kubenswrapper[4704]: I0122 16:42:58.192166 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x"] Jan 22 16:42:58 crc kubenswrapper[4704]: I0122 16:42:58.304026 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" event={"ID":"ba584155-01b6-46e0-b1df-a5444d77bb39","Type":"ContainerStarted","Data":"c8f562d473d48956cd90c40c7601c90302ab9d4ae9bf3a6e74af986cad6ab262"} Jan 22 16:42:58 crc kubenswrapper[4704]: I0122 16:42:58.340764 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq"] Jan 22 16:42:58 crc kubenswrapper[4704]: W0122 16:42:58.350889 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ab62bf8_d0d1_4f4c_ab39_4aa838a8587f.slice/crio-bf44a1d51afcbd828e81d5992df9a5913a0880b705b4b31c5aed18d775156b68 WatchSource:0}: Error finding container bf44a1d51afcbd828e81d5992df9a5913a0880b705b4b31c5aed18d775156b68: Status 404 returned error can't find the container with id bf44a1d51afcbd828e81d5992df9a5913a0880b705b4b31c5aed18d775156b68 Jan 22 16:42:59 crc kubenswrapper[4704]: I0122 16:42:59.311436 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" event={"ID":"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f","Type":"ContainerStarted","Data":"bf44a1d51afcbd828e81d5992df9a5913a0880b705b4b31c5aed18d775156b68"} Jan 22 16:43:03 crc kubenswrapper[4704]: I0122 16:43:03.347108 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" 
event={"ID":"7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f","Type":"ContainerStarted","Data":"67db66310d99be87ab09cbbb9af3e687bb1f3b9c32410ee9e4d533935a34c76f"} Jan 22 16:43:03 crc kubenswrapper[4704]: I0122 16:43:03.347436 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:43:03 crc kubenswrapper[4704]: I0122 16:43:03.348841 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" event={"ID":"ba584155-01b6-46e0-b1df-a5444d77bb39","Type":"ContainerStarted","Data":"e77037c69b3aecc3d20227bddbc7b53158ef724c63b142e7378c465b01029997"} Jan 22 16:43:03 crc kubenswrapper[4704]: I0122 16:43:03.348994 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:43:03 crc kubenswrapper[4704]: I0122 16:43:03.368458 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" podStartSLOduration=1.6722123180000001 podStartE2EDuration="6.368438515s" podCreationTimestamp="2026-01-22 16:42:57 +0000 UTC" firstStartedPulling="2026-01-22 16:42:58.356491508 +0000 UTC m=+871.001038208" lastFinishedPulling="2026-01-22 16:43:03.052717705 +0000 UTC m=+875.697264405" observedRunningTime="2026-01-22 16:43:03.365011056 +0000 UTC m=+876.009557756" watchObservedRunningTime="2026-01-22 16:43:03.368438515 +0000 UTC m=+876.012985215" Jan 22 16:43:03 crc kubenswrapper[4704]: I0122 16:43:03.391014 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" podStartSLOduration=3.079010813 podStartE2EDuration="6.390995627s" podCreationTimestamp="2026-01-22 16:42:57 +0000 UTC" firstStartedPulling="2026-01-22 16:42:58.202272408 +0000 UTC m=+870.846819108" lastFinishedPulling="2026-01-22 
16:43:01.514257222 +0000 UTC m=+874.158803922" observedRunningTime="2026-01-22 16:43:03.388615558 +0000 UTC m=+876.033162258" watchObservedRunningTime="2026-01-22 16:43:03.390995627 +0000 UTC m=+876.035542327" Jan 22 16:43:17 crc kubenswrapper[4704]: I0122 16:43:17.995806 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5868d7bb64-nb9lq" Jan 22 16:43:19 crc kubenswrapper[4704]: I0122 16:43:19.086599 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:43:19 crc kubenswrapper[4704]: I0122 16:43:19.087000 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:43:37 crc kubenswrapper[4704]: I0122 16:43:37.682715 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b4f96dd45-fsw4x" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.518836 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx"] Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.541124 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-48bzl"] Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.541401 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.560315 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.560411 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wd2bk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.565686 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx"] Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.565852 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.570549 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.573124 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.600016 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmhj6\" (UniqueName: \"kubernetes.io/projected/2693c567-580c-4c07-a470-639f63bc75aa-kube-api-access-vmhj6\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.600249 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2693c567-580c-4c07-a470-639f63bc75aa-metrics-certs\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.600342 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6m7x\" (UniqueName: \"kubernetes.io/projected/c49eb63d-b748-4048-b834-c33235bbc9b6-kube-api-access-g6m7x\") pod \"frr-k8s-webhook-server-7df86c4f6c-78tsx\" (UID: \"c49eb63d-b748-4048-b834-c33235bbc9b6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.600419 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c49eb63d-b748-4048-b834-c33235bbc9b6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-78tsx\" (UID: \"c49eb63d-b748-4048-b834-c33235bbc9b6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.600727 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-frr-conf\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.600997 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-frr-sockets\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.601171 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-metrics\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.601264 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-reloader\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.601346 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2693c567-580c-4c07-a470-639f63bc75aa-frr-startup\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.614449 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bhblk"] Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.615648 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.620223 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.620468 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.620611 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.620834 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tdrbz" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.642268 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-ds86s"] Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.643286 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.647181 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.654932 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-ds86s"] Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703040 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmhj6\" (UniqueName: \"kubernetes.io/projected/2693c567-580c-4c07-a470-639f63bc75aa-kube-api-access-vmhj6\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703083 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6df\" (UniqueName: \"kubernetes.io/projected/c3d18830-eb73-458a-aa2f-fd3bf430d009-kube-api-access-mt6df\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703128 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2693c567-580c-4c07-a470-639f63bc75aa-metrics-certs\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703155 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6m7x\" (UniqueName: \"kubernetes.io/projected/c49eb63d-b748-4048-b834-c33235bbc9b6-kube-api-access-g6m7x\") pod \"frr-k8s-webhook-server-7df86c4f6c-78tsx\" (UID: \"c49eb63d-b748-4048-b834-c33235bbc9b6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:38 crc 
kubenswrapper[4704]: I0122 16:43:38.703184 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c49eb63d-b748-4048-b834-c33235bbc9b6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-78tsx\" (UID: \"c49eb63d-b748-4048-b834-c33235bbc9b6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703200 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e1b8-9820-4023-bec6-9337958b2ffb-metrics-certs\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703222 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-frr-conf\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703244 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703263 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-frr-sockets\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: E0122 16:43:38.703279 4704 secret.go:188] Couldn't get secret 
metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 22 16:43:38 crc kubenswrapper[4704]: E0122 16:43:38.703371 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2693c567-580c-4c07-a470-639f63bc75aa-metrics-certs podName:2693c567-580c-4c07-a470-639f63bc75aa nodeName:}" failed. No retries permitted until 2026-01-22 16:43:39.203353677 +0000 UTC m=+911.847900377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2693c567-580c-4c07-a470-639f63bc75aa-metrics-certs") pod "frr-k8s-48bzl" (UID: "2693c567-580c-4c07-a470-639f63bc75aa") : secret "frr-k8s-certs-secret" not found Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703641 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-metrics\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704060 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-frr-conf\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.703298 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-metrics\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704253 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-frr-sockets\") pod 
\"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704269 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3d18830-eb73-458a-aa2f-fd3bf430d009-metallb-excludel2\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704324 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-reloader\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704345 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0d5e1b8-9820-4023-bec6-9337958b2ffb-cert\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: E0122 16:43:38.704360 4704 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 22 16:43:38 crc kubenswrapper[4704]: E0122 16:43:38.704399 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c49eb63d-b748-4048-b834-c33235bbc9b6-cert podName:c49eb63d-b748-4048-b834-c33235bbc9b6 nodeName:}" failed. No retries permitted until 2026-01-22 16:43:39.204385077 +0000 UTC m=+911.848931777 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c49eb63d-b748-4048-b834-c33235bbc9b6-cert") pod "frr-k8s-webhook-server-7df86c4f6c-78tsx" (UID: "c49eb63d-b748-4048-b834-c33235bbc9b6") : secret "frr-k8s-webhook-server-cert" not found Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704422 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-metrics-certs\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704479 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2693c567-580c-4c07-a470-639f63bc75aa-frr-startup\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704499 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhsw5\" (UniqueName: \"kubernetes.io/projected/c0d5e1b8-9820-4023-bec6-9337958b2ffb-kube-api-access-bhsw5\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.704604 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2693c567-580c-4c07-a470-639f63bc75aa-reloader\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.705221 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/2693c567-580c-4c07-a470-639f63bc75aa-frr-startup\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.724366 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6m7x\" (UniqueName: \"kubernetes.io/projected/c49eb63d-b748-4048-b834-c33235bbc9b6-kube-api-access-g6m7x\") pod \"frr-k8s-webhook-server-7df86c4f6c-78tsx\" (UID: \"c49eb63d-b748-4048-b834-c33235bbc9b6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.733265 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmhj6\" (UniqueName: \"kubernetes.io/projected/2693c567-580c-4c07-a470-639f63bc75aa-kube-api-access-vmhj6\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806047 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3d18830-eb73-458a-aa2f-fd3bf430d009-metallb-excludel2\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806114 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0d5e1b8-9820-4023-bec6-9337958b2ffb-cert\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806155 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-metrics-certs\") pod \"speaker-bhblk\" (UID: 
\"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806184 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhsw5\" (UniqueName: \"kubernetes.io/projected/c0d5e1b8-9820-4023-bec6-9337958b2ffb-kube-api-access-bhsw5\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806218 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt6df\" (UniqueName: \"kubernetes.io/projected/c3d18830-eb73-458a-aa2f-fd3bf430d009-kube-api-access-mt6df\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806284 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e1b8-9820-4023-bec6-9337958b2ffb-metrics-certs\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806310 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: E0122 16:43:38.806465 4704 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 16:43:38 crc kubenswrapper[4704]: E0122 16:43:38.806524 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist 
podName:c3d18830-eb73-458a-aa2f-fd3bf430d009 nodeName:}" failed. No retries permitted until 2026-01-22 16:43:39.306506938 +0000 UTC m=+911.951053638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist") pod "speaker-bhblk" (UID: "c3d18830-eb73-458a-aa2f-fd3bf430d009") : secret "metallb-memberlist" not found Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.806881 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c3d18830-eb73-458a-aa2f-fd3bf430d009-metallb-excludel2\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.809923 4704 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.810211 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-metrics-certs\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.810450 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0d5e1b8-9820-4023-bec6-9337958b2ffb-metrics-certs\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.820190 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0d5e1b8-9820-4023-bec6-9337958b2ffb-cert\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " 
pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.822195 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt6df\" (UniqueName: \"kubernetes.io/projected/c3d18830-eb73-458a-aa2f-fd3bf430d009-kube-api-access-mt6df\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.825436 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhsw5\" (UniqueName: \"kubernetes.io/projected/c0d5e1b8-9820-4023-bec6-9337958b2ffb-kube-api-access-bhsw5\") pod \"controller-6968d8fdc4-ds86s\" (UID: \"c0d5e1b8-9820-4023-bec6-9337958b2ffb\") " pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:38 crc kubenswrapper[4704]: I0122 16:43:38.956293 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.211761 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2693c567-580c-4c07-a470-639f63bc75aa-metrics-certs\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.212127 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c49eb63d-b748-4048-b834-c33235bbc9b6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-78tsx\" (UID: \"c49eb63d-b748-4048-b834-c33235bbc9b6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.217483 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2693c567-580c-4c07-a470-639f63bc75aa-metrics-certs\") pod \"frr-k8s-48bzl\" (UID: \"2693c567-580c-4c07-a470-639f63bc75aa\") " pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.217637 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c49eb63d-b748-4048-b834-c33235bbc9b6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-78tsx\" (UID: \"c49eb63d-b748-4048-b834-c33235bbc9b6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.313142 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:39 crc kubenswrapper[4704]: E0122 16:43:39.313365 4704 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 16:43:39 crc kubenswrapper[4704]: E0122 16:43:39.313472 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist podName:c3d18830-eb73-458a-aa2f-fd3bf430d009 nodeName:}" failed. No retries permitted until 2026-01-22 16:43:40.313450099 +0000 UTC m=+912.957996799 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist") pod "speaker-bhblk" (UID: "c3d18830-eb73-458a-aa2f-fd3bf430d009") : secret "metallb-memberlist" not found Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.425174 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-ds86s"] Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.497571 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.512182 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.618591 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-ds86s" event={"ID":"c0d5e1b8-9820-4023-bec6-9337958b2ffb","Type":"ContainerStarted","Data":"ea18ce83e60fbef224301e90a81277cd0a5cbe258f0bfc5aa18ce9fea61ea566"} Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.618664 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-ds86s" event={"ID":"c0d5e1b8-9820-4023-bec6-9337958b2ffb","Type":"ContainerStarted","Data":"4bb1689d5c05f8969a01207f127a8413b61da0cb146bbedea5063769b9584baf"} Jan 22 16:43:39 crc kubenswrapper[4704]: I0122 16:43:39.937138 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx"] Jan 22 16:43:39 crc kubenswrapper[4704]: W0122 16:43:39.944241 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc49eb63d_b748_4048_b834_c33235bbc9b6.slice/crio-0d11bd79d50eba1c5ad74052dd9ef38e9c8a32f50084fe86fc96ba39f4c4768d WatchSource:0}: Error finding container 0d11bd79d50eba1c5ad74052dd9ef38e9c8a32f50084fe86fc96ba39f4c4768d: Status 404 returned error can't find the container with id 0d11bd79d50eba1c5ad74052dd9ef38e9c8a32f50084fe86fc96ba39f4c4768d Jan 22 16:43:40 crc kubenswrapper[4704]: I0122 16:43:40.326890 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:40 crc 
kubenswrapper[4704]: I0122 16:43:40.342737 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c3d18830-eb73-458a-aa2f-fd3bf430d009-memberlist\") pod \"speaker-bhblk\" (UID: \"c3d18830-eb73-458a-aa2f-fd3bf430d009\") " pod="metallb-system/speaker-bhblk" Jan 22 16:43:40 crc kubenswrapper[4704]: I0122 16:43:40.431381 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bhblk" Jan 22 16:43:40 crc kubenswrapper[4704]: W0122 16:43:40.450535 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d18830_eb73_458a_aa2f_fd3bf430d009.slice/crio-7bd3e0832fef69b4324264ad1444ce28b78da7893b5b91c967c4fd4850a975d7 WatchSource:0}: Error finding container 7bd3e0832fef69b4324264ad1444ce28b78da7893b5b91c967c4fd4850a975d7: Status 404 returned error can't find the container with id 7bd3e0832fef69b4324264ad1444ce28b78da7893b5b91c967c4fd4850a975d7 Jan 22 16:43:40 crc kubenswrapper[4704]: I0122 16:43:40.626131 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerStarted","Data":"45b54e6dda7496c3898b4dc38ceceddb70d50ec194ccc8bf2e8850b5e5c5f0a0"} Jan 22 16:43:40 crc kubenswrapper[4704]: I0122 16:43:40.627543 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bhblk" event={"ID":"c3d18830-eb73-458a-aa2f-fd3bf430d009","Type":"ContainerStarted","Data":"7bd3e0832fef69b4324264ad1444ce28b78da7893b5b91c967c4fd4850a975d7"} Jan 22 16:43:40 crc kubenswrapper[4704]: I0122 16:43:40.628598 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" event={"ID":"c49eb63d-b748-4048-b834-c33235bbc9b6","Type":"ContainerStarted","Data":"0d11bd79d50eba1c5ad74052dd9ef38e9c8a32f50084fe86fc96ba39f4c4768d"} Jan 22 16:43:40 crc 
kubenswrapper[4704]: I0122 16:43:40.630036 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-ds86s" event={"ID":"c0d5e1b8-9820-4023-bec6-9337958b2ffb","Type":"ContainerStarted","Data":"0f3781709d957a434476db85b3974601bd1aecea7a78332b1f176b542f753a62"} Jan 22 16:43:40 crc kubenswrapper[4704]: I0122 16:43:40.630934 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:40 crc kubenswrapper[4704]: I0122 16:43:40.653432 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-ds86s" podStartSLOduration=2.653411858 podStartE2EDuration="2.653411858s" podCreationTimestamp="2026-01-22 16:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:43:40.647951699 +0000 UTC m=+913.292498419" watchObservedRunningTime="2026-01-22 16:43:40.653411858 +0000 UTC m=+913.297958558" Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.139782 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vn8r5"] Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.141406 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.166745 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vn8r5"]
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.251081 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-utilities\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.251133 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrb4\" (UniqueName: \"kubernetes.io/projected/4bf3630f-973b-4b92-a377-64dec7a5675b-kube-api-access-ckrb4\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.251209 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-catalog-content\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.352394 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-catalog-content\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.352442 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-utilities\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.352469 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckrb4\" (UniqueName: \"kubernetes.io/projected/4bf3630f-973b-4b92-a377-64dec7a5675b-kube-api-access-ckrb4\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.353302 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-catalog-content\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.353463 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-utilities\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.382627 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckrb4\" (UniqueName: \"kubernetes.io/projected/4bf3630f-973b-4b92-a377-64dec7a5675b-kube-api-access-ckrb4\") pod \"certified-operators-vn8r5\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.457616 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.644090 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bhblk" event={"ID":"c3d18830-eb73-458a-aa2f-fd3bf430d009","Type":"ContainerStarted","Data":"7929c84355d325c580edac002a31506668833275c5a477aae90107b934993cdc"}
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.644340 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bhblk" event={"ID":"c3d18830-eb73-458a-aa2f-fd3bf430d009","Type":"ContainerStarted","Data":"34e4be1e504bc09a779389f88e0279a8b28291e0be9f8913f266b73e934e20d4"}
Jan 22 16:43:41 crc kubenswrapper[4704]: I0122 16:43:41.671748 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-bhblk" podStartSLOduration=3.6717329789999997 podStartE2EDuration="3.671732979s" podCreationTimestamp="2026-01-22 16:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:43:41.669553795 +0000 UTC m=+914.314100495" watchObservedRunningTime="2026-01-22 16:43:41.671732979 +0000 UTC m=+914.316279679"
Jan 22 16:43:42 crc kubenswrapper[4704]: I0122 16:43:42.050700 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vn8r5"]
Jan 22 16:43:42 crc kubenswrapper[4704]: W0122 16:43:42.059564 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bf3630f_973b_4b92_a377_64dec7a5675b.slice/crio-7d1c6cb4f75c1ca452faff3eed8490896f3b1afc2e04cc8c01e123563deaeee3 WatchSource:0}: Error finding container 7d1c6cb4f75c1ca452faff3eed8490896f3b1afc2e04cc8c01e123563deaeee3: Status 404 returned error can't find the container with id 7d1c6cb4f75c1ca452faff3eed8490896f3b1afc2e04cc8c01e123563deaeee3
Jan 22 16:43:42 crc kubenswrapper[4704]: I0122 16:43:42.649769 4704 generic.go:334] "Generic (PLEG): container finished" podID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerID="1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da" exitCode=0
Jan 22 16:43:42 crc kubenswrapper[4704]: I0122 16:43:42.649870 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vn8r5" event={"ID":"4bf3630f-973b-4b92-a377-64dec7a5675b","Type":"ContainerDied","Data":"1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da"}
Jan 22 16:43:42 crc kubenswrapper[4704]: I0122 16:43:42.650127 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vn8r5" event={"ID":"4bf3630f-973b-4b92-a377-64dec7a5675b","Type":"ContainerStarted","Data":"7d1c6cb4f75c1ca452faff3eed8490896f3b1afc2e04cc8c01e123563deaeee3"}
Jan 22 16:43:42 crc kubenswrapper[4704]: I0122 16:43:42.650331 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bhblk"
Jan 22 16:43:43 crc kubenswrapper[4704]: I0122 16:43:43.660359 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vn8r5" event={"ID":"4bf3630f-973b-4b92-a377-64dec7a5675b","Type":"ContainerStarted","Data":"78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b"}
Jan 22 16:43:44 crc kubenswrapper[4704]: I0122 16:43:44.669136 4704 generic.go:334] "Generic (PLEG): container finished" podID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerID="78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b" exitCode=0
Jan 22 16:43:44 crc kubenswrapper[4704]: I0122 16:43:44.669230 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vn8r5" event={"ID":"4bf3630f-973b-4b92-a377-64dec7a5675b","Type":"ContainerDied","Data":"78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b"}
Jan 22 16:43:47 crc kubenswrapper[4704]: I0122 16:43:47.705463 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vn8r5" event={"ID":"4bf3630f-973b-4b92-a377-64dec7a5675b","Type":"ContainerStarted","Data":"e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3"}
Jan 22 16:43:47 crc kubenswrapper[4704]: I0122 16:43:47.707074 4704 generic.go:334] "Generic (PLEG): container finished" podID="2693c567-580c-4c07-a470-639f63bc75aa" containerID="1c0a2c8e79f81d380fe10bcef5e52b6bbdc1e19dbb7e251914d68655c9d43b9d" exitCode=0
Jan 22 16:43:47 crc kubenswrapper[4704]: I0122 16:43:47.707188 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerDied","Data":"1c0a2c8e79f81d380fe10bcef5e52b6bbdc1e19dbb7e251914d68655c9d43b9d"}
Jan 22 16:43:47 crc kubenswrapper[4704]: I0122 16:43:47.708631 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" event={"ID":"c49eb63d-b748-4048-b834-c33235bbc9b6","Type":"ContainerStarted","Data":"00db591990abd19788c63ec435c44e5e97fa1d67fb24f5fcde8ea6dc196eb6f4"}
Jan 22 16:43:47 crc kubenswrapper[4704]: I0122 16:43:47.708834 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx"
Jan 22 16:43:47 crc kubenswrapper[4704]: I0122 16:43:47.735388 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vn8r5" podStartSLOduration=2.087270438 podStartE2EDuration="6.73536873s" podCreationTimestamp="2026-01-22 16:43:41 +0000 UTC" firstStartedPulling="2026-01-22 16:43:42.651201612 +0000 UTC m=+915.295748312" lastFinishedPulling="2026-01-22 16:43:47.299299904 +0000 UTC m=+919.943846604" observedRunningTime="2026-01-22 16:43:47.73259404 +0000 UTC m=+920.377140750" watchObservedRunningTime="2026-01-22 16:43:47.73536873 +0000 UTC m=+920.379915430"
Jan 22 
16:43:47 crc kubenswrapper[4704]: I0122 16:43:47.773730 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" podStartSLOduration=2.664565282 podStartE2EDuration="9.773710542s" podCreationTimestamp="2026-01-22 16:43:38 +0000 UTC" firstStartedPulling="2026-01-22 16:43:39.946809867 +0000 UTC m=+912.591356577" lastFinishedPulling="2026-01-22 16:43:47.055955127 +0000 UTC m=+919.700501837" observedRunningTime="2026-01-22 16:43:47.770481928 +0000 UTC m=+920.415028628" watchObservedRunningTime="2026-01-22 16:43:47.773710542 +0000 UTC m=+920.418257242"
Jan 22 16:43:48 crc kubenswrapper[4704]: I0122 16:43:48.719882 4704 generic.go:334] "Generic (PLEG): container finished" podID="2693c567-580c-4c07-a470-639f63bc75aa" containerID="dab8972192c77b5c5df1def71618690b175451d1e20f4ff26b36621890fa373e" exitCode=0
Jan 22 16:43:48 crc kubenswrapper[4704]: I0122 16:43:48.719981 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerDied","Data":"dab8972192c77b5c5df1def71618690b175451d1e20f4ff26b36621890fa373e"}
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.086401 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.086777 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.086843 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r"
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.087448 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8865a0e2381cbeec53f87553007cf63e787be4f45fe167d5da2b4f406dd127d"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.087504 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://c8865a0e2381cbeec53f87553007cf63e787be4f45fe167d5da2b4f406dd127d" gracePeriod=600
Jan 22 16:43:49 crc kubenswrapper[4704]: E0122 16:43:49.176478 4704 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8e25829_99af_4717_87f3_43a79b9d8c26.slice/crio-c8865a0e2381cbeec53f87553007cf63e787be4f45fe167d5da2b4f406dd127d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2693c567_580c_4c07_a470_639f63bc75aa.slice/crio-5baeb214bb4081fac92b2b385567c6a60462630c7e11999b6156b634cd97c8c3.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.728467 4704 generic.go:334] "Generic (PLEG): container finished" podID="2693c567-580c-4c07-a470-639f63bc75aa" containerID="5baeb214bb4081fac92b2b385567c6a60462630c7e11999b6156b634cd97c8c3" exitCode=0
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.728555 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerDied","Data":"5baeb214bb4081fac92b2b385567c6a60462630c7e11999b6156b634cd97c8c3"}
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.731179 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="c8865a0e2381cbeec53f87553007cf63e787be4f45fe167d5da2b4f406dd127d" exitCode=0
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.731207 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"c8865a0e2381cbeec53f87553007cf63e787be4f45fe167d5da2b4f406dd127d"}
Jan 22 16:43:49 crc kubenswrapper[4704]: I0122 16:43:49.731270 4704 scope.go:117] "RemoveContainer" containerID="c26a9735fc32abd042dcb8a6ea9f8f47b9946bfd125903a6c3f95bae0b5c2e0d"
Jan 22 16:43:50 crc kubenswrapper[4704]: I0122 16:43:50.435090 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bhblk"
Jan 22 16:43:50 crc kubenswrapper[4704]: I0122 16:43:50.743549 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerStarted","Data":"d39638141d737d7ad07b45317f6dda109cc8c645c5962e6d14afbb9a647ba503"}
Jan 22 16:43:50 crc kubenswrapper[4704]: I0122 16:43:50.743908 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerStarted","Data":"bac92175e0c2d19d0f8f645539b47a4723cba099a426bbf17fc2d7d60d87f2fa"}
Jan 22 16:43:50 crc kubenswrapper[4704]: I0122 16:43:50.743925 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerStarted","Data":"2494cc1d93eb21fd1e58c78c0e6f8ee1859e773e6dd92886791bbf914f0ace46"}
Jan 22 16:43:50 crc kubenswrapper[4704]: I0122 16:43:50.743938 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerStarted","Data":"74512a75862afad3a9f6578dceb3f0ec3f15e14324e83ab62e08b5140affbdf2"}
Jan 22 16:43:50 crc kubenswrapper[4704]: I0122 16:43:50.743949 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerStarted","Data":"7b96fd16e1507bfc7ccec9e7183c62ba8dd455ff7206597c64db1c90e533ac22"}
Jan 22 16:43:50 crc kubenswrapper[4704]: I0122 16:43:50.747075 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"88cf191bb3e64eb833ed16834e1430c8c271d9cb96c329f4eba42d0922f7467f"}
Jan 22 16:43:51 crc kubenswrapper[4704]: I0122 16:43:51.458276 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:51 crc kubenswrapper[4704]: I0122 16:43:51.458320 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:51 crc kubenswrapper[4704]: I0122 16:43:51.519840 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vn8r5"
Jan 22 16:43:51 crc kubenswrapper[4704]: I0122 16:43:51.760565 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-48bzl" event={"ID":"2693c567-580c-4c07-a470-639f63bc75aa","Type":"ContainerStarted","Data":"1729aae2d51100cc055dee3ff55cf782b4f9c9db6b685c8da45789ad4a5c3a54"}
Jan 22 16:43:51 crc kubenswrapper[4704]: I0122 16:43:51.805506 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-48bzl" podStartSLOduration=6.421364676 podStartE2EDuration="13.805480061s" podCreationTimestamp="2026-01-22 16:43:38 +0000 UTC" firstStartedPulling="2026-01-22 16:43:39.641907454 +0000 UTC m=+912.286454154" lastFinishedPulling="2026-01-22 16:43:47.026022829 +0000 UTC m=+919.670569539" observedRunningTime="2026-01-22 16:43:51.802880846 +0000 UTC m=+924.447427566" watchObservedRunningTime="2026-01-22 16:43:51.805480061 +0000 UTC m=+924.450026771"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.068357 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"]
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.069566 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.076047 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"]
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.078764 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.110664 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.110771 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.110844 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6spf8\" (UniqueName: \"kubernetes.io/projected/365dc18e-4b90-48f3-9aa9-214fc97be804-kube-api-access-6spf8\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.211435 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.211493 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6spf8\" (UniqueName: \"kubernetes.io/projected/365dc18e-4b90-48f3-9aa9-214fc97be804-kube-api-access-6spf8\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"
Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.211526 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.212158 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.212261 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.241297 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6spf8\" (UniqueName: \"kubernetes.io/projected/365dc18e-4b90-48f3-9aa9-214fc97be804-kube-api-access-6spf8\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.383601 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.645450 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr"] Jan 22 16:43:52 crc kubenswrapper[4704]: W0122 16:43:52.657644 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod365dc18e_4b90_48f3_9aa9_214fc97be804.slice/crio-0be24283fafb31a1820b90b8ffecd7d34da87a2891a0e186b2da0f02e8692407 WatchSource:0}: Error finding container 0be24283fafb31a1820b90b8ffecd7d34da87a2891a0e186b2da0f02e8692407: Status 404 returned error can't find the container with id 0be24283fafb31a1820b90b8ffecd7d34da87a2891a0e186b2da0f02e8692407 Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.772068 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" event={"ID":"365dc18e-4b90-48f3-9aa9-214fc97be804","Type":"ContainerStarted","Data":"0be24283fafb31a1820b90b8ffecd7d34da87a2891a0e186b2da0f02e8692407"} Jan 22 16:43:52 crc kubenswrapper[4704]: I0122 16:43:52.772111 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:53 crc kubenswrapper[4704]: I0122 16:43:53.781755 4704 generic.go:334] "Generic (PLEG): container finished" podID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerID="c46fce96c1bdbb6aec4bac6c0608e7c5d135c8fa732d1704fb5d9f9076f632fa" exitCode=0 Jan 22 16:43:53 crc kubenswrapper[4704]: I0122 16:43:53.781860 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" 
event={"ID":"365dc18e-4b90-48f3-9aa9-214fc97be804","Type":"ContainerDied","Data":"c46fce96c1bdbb6aec4bac6c0608e7c5d135c8fa732d1704fb5d9f9076f632fa"} Jan 22 16:43:54 crc kubenswrapper[4704]: I0122 16:43:54.513384 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:54 crc kubenswrapper[4704]: I0122 16:43:54.555366 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-48bzl" Jan 22 16:43:57 crc kubenswrapper[4704]: I0122 16:43:57.813934 4704 generic.go:334] "Generic (PLEG): container finished" podID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerID="cdcb15cb1da0efe20ddf07017b0184c47c66cfb6b3de5416b08c5bb72f8a0bbf" exitCode=0 Jan 22 16:43:57 crc kubenswrapper[4704]: I0122 16:43:57.814033 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" event={"ID":"365dc18e-4b90-48f3-9aa9-214fc97be804","Type":"ContainerDied","Data":"cdcb15cb1da0efe20ddf07017b0184c47c66cfb6b3de5416b08c5bb72f8a0bbf"} Jan 22 16:43:58 crc kubenswrapper[4704]: I0122 16:43:58.829223 4704 generic.go:334] "Generic (PLEG): container finished" podID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerID="57d41b80b5abef19bff633741f0830f3be3ae193dd508937987712abaa64a2f2" exitCode=0 Jan 22 16:43:58 crc kubenswrapper[4704]: I0122 16:43:58.829314 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" event={"ID":"365dc18e-4b90-48f3-9aa9-214fc97be804","Type":"ContainerDied","Data":"57d41b80b5abef19bff633741f0830f3be3ae193dd508937987712abaa64a2f2"} Jan 22 16:43:58 crc kubenswrapper[4704]: I0122 16:43:58.962372 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-ds86s" Jan 22 16:43:59 crc kubenswrapper[4704]: I0122 16:43:59.503069 4704 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-78tsx" Jan 22 16:43:59 crc kubenswrapper[4704]: I0122 16:43:59.519228 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-48bzl" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.171085 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.228277 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-bundle\") pod \"365dc18e-4b90-48f3-9aa9-214fc97be804\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.228319 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6spf8\" (UniqueName: \"kubernetes.io/projected/365dc18e-4b90-48f3-9aa9-214fc97be804-kube-api-access-6spf8\") pod \"365dc18e-4b90-48f3-9aa9-214fc97be804\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.228349 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-util\") pod \"365dc18e-4b90-48f3-9aa9-214fc97be804\" (UID: \"365dc18e-4b90-48f3-9aa9-214fc97be804\") " Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.229984 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-bundle" (OuterVolumeSpecName: "bundle") pod "365dc18e-4b90-48f3-9aa9-214fc97be804" (UID: "365dc18e-4b90-48f3-9aa9-214fc97be804"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.236963 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365dc18e-4b90-48f3-9aa9-214fc97be804-kube-api-access-6spf8" (OuterVolumeSpecName: "kube-api-access-6spf8") pod "365dc18e-4b90-48f3-9aa9-214fc97be804" (UID: "365dc18e-4b90-48f3-9aa9-214fc97be804"). InnerVolumeSpecName "kube-api-access-6spf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.238868 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-util" (OuterVolumeSpecName: "util") pod "365dc18e-4b90-48f3-9aa9-214fc97be804" (UID: "365dc18e-4b90-48f3-9aa9-214fc97be804"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.328966 4704 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.329000 4704 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/365dc18e-4b90-48f3-9aa9-214fc97be804-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.329010 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6spf8\" (UniqueName: \"kubernetes.io/projected/365dc18e-4b90-48f3-9aa9-214fc97be804-kube-api-access-6spf8\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.846527 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" 
event={"ID":"365dc18e-4b90-48f3-9aa9-214fc97be804","Type":"ContainerDied","Data":"0be24283fafb31a1820b90b8ffecd7d34da87a2891a0e186b2da0f02e8692407"} Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.846597 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be24283fafb31a1820b90b8ffecd7d34da87a2891a0e186b2da0f02e8692407" Jan 22 16:44:00 crc kubenswrapper[4704]: I0122 16:44:00.846605 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr" Jan 22 16:44:01 crc kubenswrapper[4704]: I0122 16:44:01.510225 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vn8r5" Jan 22 16:44:01 crc kubenswrapper[4704]: I0122 16:44:01.561326 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vn8r5"] Jan 22 16:44:01 crc kubenswrapper[4704]: I0122 16:44:01.857561 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vn8r5" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerName="registry-server" containerID="cri-o://e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3" gracePeriod=2 Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.295490 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vn8r5" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.462313 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckrb4\" (UniqueName: \"kubernetes.io/projected/4bf3630f-973b-4b92-a377-64dec7a5675b-kube-api-access-ckrb4\") pod \"4bf3630f-973b-4b92-a377-64dec7a5675b\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.462376 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-utilities\") pod \"4bf3630f-973b-4b92-a377-64dec7a5675b\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.462407 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-catalog-content\") pod \"4bf3630f-973b-4b92-a377-64dec7a5675b\" (UID: \"4bf3630f-973b-4b92-a377-64dec7a5675b\") " Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.463553 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-utilities" (OuterVolumeSpecName: "utilities") pod "4bf3630f-973b-4b92-a377-64dec7a5675b" (UID: "4bf3630f-973b-4b92-a377-64dec7a5675b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.466415 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf3630f-973b-4b92-a377-64dec7a5675b-kube-api-access-ckrb4" (OuterVolumeSpecName: "kube-api-access-ckrb4") pod "4bf3630f-973b-4b92-a377-64dec7a5675b" (UID: "4bf3630f-973b-4b92-a377-64dec7a5675b"). InnerVolumeSpecName "kube-api-access-ckrb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.523476 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bf3630f-973b-4b92-a377-64dec7a5675b" (UID: "4bf3630f-973b-4b92-a377-64dec7a5675b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.563645 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckrb4\" (UniqueName: \"kubernetes.io/projected/4bf3630f-973b-4b92-a377-64dec7a5675b-kube-api-access-ckrb4\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.563684 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.563694 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf3630f-973b-4b92-a377-64dec7a5675b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.865394 4704 generic.go:334] "Generic (PLEG): container finished" podID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerID="e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3" exitCode=0 Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.865444 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vn8r5" event={"ID":"4bf3630f-973b-4b92-a377-64dec7a5675b","Type":"ContainerDied","Data":"e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3"} Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.865452 4704 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vn8r5" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.865480 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vn8r5" event={"ID":"4bf3630f-973b-4b92-a377-64dec7a5675b","Type":"ContainerDied","Data":"7d1c6cb4f75c1ca452faff3eed8490896f3b1afc2e04cc8c01e123563deaeee3"} Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.865499 4704 scope.go:117] "RemoveContainer" containerID="e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.889135 4704 scope.go:117] "RemoveContainer" containerID="78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.902694 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vn8r5"] Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.922663 4704 scope.go:117] "RemoveContainer" containerID="1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.942117 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vn8r5"] Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.945786 4704 scope.go:117] "RemoveContainer" containerID="e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3" Jan 22 16:44:02 crc kubenswrapper[4704]: E0122 16:44:02.946281 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3\": container with ID starting with e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3 not found: ID does not exist" containerID="e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.946320 
4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3"} err="failed to get container status \"e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3\": rpc error: code = NotFound desc = could not find container \"e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3\": container with ID starting with e004562d0c9fa5e0f04f7dd572015d64f3c3b051f7c8eba2bb4efab68db64fe3 not found: ID does not exist" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.946341 4704 scope.go:117] "RemoveContainer" containerID="78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b" Jan 22 16:44:02 crc kubenswrapper[4704]: E0122 16:44:02.946609 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b\": container with ID starting with 78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b not found: ID does not exist" containerID="78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.946717 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b"} err="failed to get container status \"78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b\": rpc error: code = NotFound desc = could not find container \"78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b\": container with ID starting with 78f48f9615c2b68f7a764e593dc7e27662384384458081276d317d8e2a37f68b not found: ID does not exist" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.946884 4704 scope.go:117] "RemoveContainer" containerID="1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da" Jan 22 16:44:02 crc kubenswrapper[4704]: E0122 
16:44:02.949462 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da\": container with ID starting with 1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da not found: ID does not exist" containerID="1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da" Jan 22 16:44:02 crc kubenswrapper[4704]: I0122 16:44:02.949497 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da"} err="failed to get container status \"1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da\": rpc error: code = NotFound desc = could not find container \"1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da\": container with ID starting with 1aaa11e199dfe7c919253e3d824fa74282b14f8d0313ace8cab012d48cc741da not found: ID does not exist" Jan 22 16:44:03 crc kubenswrapper[4704]: I0122 16:44:03.641351 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" path="/var/lib/kubelet/pods/4bf3630f-973b-4b92-a377-64dec7a5675b/volumes" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.160431 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch"] Jan 22 16:44:07 crc kubenswrapper[4704]: E0122 16:44:07.161043 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerName="extract-content" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161060 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerName="extract-content" Jan 22 16:44:07 crc kubenswrapper[4704]: E0122 16:44:07.161077 4704 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerName="extract" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161086 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerName="extract" Jan 22 16:44:07 crc kubenswrapper[4704]: E0122 16:44:07.161101 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerName="registry-server" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161110 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerName="registry-server" Jan 22 16:44:07 crc kubenswrapper[4704]: E0122 16:44:07.161128 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerName="util" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161135 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerName="util" Jan 22 16:44:07 crc kubenswrapper[4704]: E0122 16:44:07.161145 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerName="extract-utilities" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161154 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" containerName="extract-utilities" Jan 22 16:44:07 crc kubenswrapper[4704]: E0122 16:44:07.161169 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerName="pull" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161178 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerName="pull" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161314 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf3630f-973b-4b92-a377-64dec7a5675b" 
containerName="registry-server" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161337 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="365dc18e-4b90-48f3-9aa9-214fc97be804" containerName="extract" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.161867 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.165045 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.165222 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.165456 4704 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-cjtpr" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.193690 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch"] Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.326860 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8grc\" (UniqueName: \"kubernetes.io/projected/18c18159-250a-4631-a77f-8d49965c86d6-kube-api-access-l8grc\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fg6ch\" (UID: \"18c18159-250a-4631-a77f-8d49965c86d6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.326921 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c18159-250a-4631-a77f-8d49965c86d6-tmp\") pod 
\"cert-manager-operator-controller-manager-64cf6dff88-fg6ch\" (UID: \"18c18159-250a-4631-a77f-8d49965c86d6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.428812 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8grc\" (UniqueName: \"kubernetes.io/projected/18c18159-250a-4631-a77f-8d49965c86d6-kube-api-access-l8grc\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fg6ch\" (UID: \"18c18159-250a-4631-a77f-8d49965c86d6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.428883 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c18159-250a-4631-a77f-8d49965c86d6-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fg6ch\" (UID: \"18c18159-250a-4631-a77f-8d49965c86d6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.429461 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18c18159-250a-4631-a77f-8d49965c86d6-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fg6ch\" (UID: \"18c18159-250a-4631-a77f-8d49965c86d6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.449939 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8grc\" (UniqueName: \"kubernetes.io/projected/18c18159-250a-4631-a77f-8d49965c86d6-kube-api-access-l8grc\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fg6ch\" (UID: \"18c18159-250a-4631-a77f-8d49965c86d6\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.477773 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" Jan 22 16:44:07 crc kubenswrapper[4704]: I0122 16:44:07.935562 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch"] Jan 22 16:44:07 crc kubenswrapper[4704]: W0122 16:44:07.946926 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18c18159_250a_4631_a77f_8d49965c86d6.slice/crio-0f7bae852794043b44e74faeb55b6681705957631f7331f6309a119a0846d7ac WatchSource:0}: Error finding container 0f7bae852794043b44e74faeb55b6681705957631f7331f6309a119a0846d7ac: Status 404 returned error can't find the container with id 0f7bae852794043b44e74faeb55b6681705957631f7331f6309a119a0846d7ac Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.158448 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-92q6s"] Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.160143 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.171594 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92q6s"] Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.339247 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-utilities\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.339324 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-catalog-content\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.339423 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88pdc\" (UniqueName: \"kubernetes.io/projected/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-kube-api-access-88pdc\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.440032 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-utilities\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.440088 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-catalog-content\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.440180 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88pdc\" (UniqueName: \"kubernetes.io/projected/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-kube-api-access-88pdc\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.440742 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-catalog-content\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.440917 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-utilities\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.467733 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88pdc\" (UniqueName: \"kubernetes.io/projected/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-kube-api-access-88pdc\") pod \"community-operators-92q6s\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.478382 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.772428 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92q6s"] Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.944362 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92q6s" event={"ID":"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f","Type":"ContainerStarted","Data":"c57563fad482e747cf4338d4f98e09feae72a80f792d59f83ef7d7bb854173d3"} Jan 22 16:44:08 crc kubenswrapper[4704]: I0122 16:44:08.952443 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" event={"ID":"18c18159-250a-4631-a77f-8d49965c86d6","Type":"ContainerStarted","Data":"0f7bae852794043b44e74faeb55b6681705957631f7331f6309a119a0846d7ac"} Jan 22 16:44:09 crc kubenswrapper[4704]: I0122 16:44:09.961007 4704 generic.go:334] "Generic (PLEG): container finished" podID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerID="f76aaf346b8fd72a48c1bb5c91c450e0956e8f87d737d7d5fef87a2b1f56684c" exitCode=0 Jan 22 16:44:09 crc kubenswrapper[4704]: I0122 16:44:09.961050 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92q6s" event={"ID":"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f","Type":"ContainerDied","Data":"f76aaf346b8fd72a48c1bb5c91c450e0956e8f87d737d7d5fef87a2b1f56684c"} Jan 22 16:44:10 crc kubenswrapper[4704]: I0122 16:44:10.969924 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92q6s" event={"ID":"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f","Type":"ContainerStarted","Data":"e1216b0eee8f62e4c1a8b411af7bd8d50e21bb6d3f6f52f0b70c4241f5b7a880"} Jan 22 16:44:11 crc kubenswrapper[4704]: I0122 16:44:11.982623 4704 generic.go:334] "Generic (PLEG): container finished" podID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" 
containerID="e1216b0eee8f62e4c1a8b411af7bd8d50e21bb6d3f6f52f0b70c4241f5b7a880" exitCode=0 Jan 22 16:44:11 crc kubenswrapper[4704]: I0122 16:44:11.982732 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92q6s" event={"ID":"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f","Type":"ContainerDied","Data":"e1216b0eee8f62e4c1a8b411af7bd8d50e21bb6d3f6f52f0b70c4241f5b7a880"} Jan 22 16:44:12 crc kubenswrapper[4704]: I0122 16:44:12.993013 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92q6s" event={"ID":"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f","Type":"ContainerStarted","Data":"92d3d17f6c7a4a02276f76600f5ac3957ae29a21af537fda6c6fdb3a44b04b5d"} Jan 22 16:44:13 crc kubenswrapper[4704]: I0122 16:44:13.010355 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-92q6s" podStartSLOduration=2.403235354 podStartE2EDuration="5.010337908s" podCreationTimestamp="2026-01-22 16:44:08 +0000 UTC" firstStartedPulling="2026-01-22 16:44:09.963302826 +0000 UTC m=+942.607849526" lastFinishedPulling="2026-01-22 16:44:12.57040538 +0000 UTC m=+945.214952080" observedRunningTime="2026-01-22 16:44:13.008234557 +0000 UTC m=+945.652781277" watchObservedRunningTime="2026-01-22 16:44:13.010337908 +0000 UTC m=+945.654884608" Jan 22 16:44:18 crc kubenswrapper[4704]: I0122 16:44:18.479258 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:18 crc kubenswrapper[4704]: I0122 16:44:18.479676 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:18 crc kubenswrapper[4704]: I0122 16:44:18.544453 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:19 crc kubenswrapper[4704]: I0122 16:44:19.050982 
4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" event={"ID":"18c18159-250a-4631-a77f-8d49965c86d6","Type":"ContainerStarted","Data":"2801c1ed5288ab35cb2128c97ceea5dc9a47e0edb150658558f3bee45d854129"} Jan 22 16:44:19 crc kubenswrapper[4704]: I0122 16:44:19.079085 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fg6ch" podStartSLOduration=1.924781149 podStartE2EDuration="12.079061967s" podCreationTimestamp="2026-01-22 16:44:07 +0000 UTC" firstStartedPulling="2026-01-22 16:44:07.951483054 +0000 UTC m=+940.596029754" lastFinishedPulling="2026-01-22 16:44:18.105763882 +0000 UTC m=+950.750310572" observedRunningTime="2026-01-22 16:44:19.072877188 +0000 UTC m=+951.717423908" watchObservedRunningTime="2026-01-22 16:44:19.079061967 +0000 UTC m=+951.723608667" Jan 22 16:44:19 crc kubenswrapper[4704]: I0122 16:44:19.102371 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:19 crc kubenswrapper[4704]: I0122 16:44:19.166304 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92q6s"] Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.061840 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-92q6s" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="registry-server" containerID="cri-o://92d3d17f6c7a4a02276f76600f5ac3957ae29a21af537fda6c6fdb3a44b04b5d" gracePeriod=2 Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.821334 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-c72g6"] Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.822422 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.824202 4704 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5j2r9" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.824252 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.828713 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.839516 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-c72g6"] Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.846742 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a268e464-161c-413c-ac49-da3a0c827514-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-c72g6\" (UID: \"a268e464-161c-413c-ac49-da3a0c827514\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.846841 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46bw\" (UniqueName: \"kubernetes.io/projected/a268e464-161c-413c-ac49-da3a0c827514-kube-api-access-w46bw\") pod \"cert-manager-webhook-f4fb5df64-c72g6\" (UID: \"a268e464-161c-413c-ac49-da3a0c827514\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.948180 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a268e464-161c-413c-ac49-da3a0c827514-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-c72g6\" (UID: \"a268e464-161c-413c-ac49-da3a0c827514\") 
" pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.948229 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46bw\" (UniqueName: \"kubernetes.io/projected/a268e464-161c-413c-ac49-da3a0c827514-kube-api-access-w46bw\") pod \"cert-manager-webhook-f4fb5df64-c72g6\" (UID: \"a268e464-161c-413c-ac49-da3a0c827514\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.975361 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a268e464-161c-413c-ac49-da3a0c827514-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-c72g6\" (UID: \"a268e464-161c-413c-ac49-da3a0c827514\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:21 crc kubenswrapper[4704]: I0122 16:44:21.979699 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46bw\" (UniqueName: \"kubernetes.io/projected/a268e464-161c-413c-ac49-da3a0c827514-kube-api-access-w46bw\") pod \"cert-manager-webhook-f4fb5df64-c72g6\" (UID: \"a268e464-161c-413c-ac49-da3a0c827514\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.082362 4704 generic.go:334] "Generic (PLEG): container finished" podID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerID="92d3d17f6c7a4a02276f76600f5ac3957ae29a21af537fda6c6fdb3a44b04b5d" exitCode=0 Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.082404 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92q6s" event={"ID":"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f","Type":"ContainerDied","Data":"92d3d17f6c7a4a02276f76600f5ac3957ae29a21af537fda6c6fdb3a44b04b5d"} Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.085716 4704 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9"] Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.086472 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:22 crc kubenswrapper[4704]: W0122 16:44:22.088422 4704 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-5zrb2": failed to list *v1.Secret: secrets "cert-manager-cainjector-dockercfg-5zrb2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Jan 22 16:44:22 crc kubenswrapper[4704]: E0122 16:44:22.088478 4704 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-5zrb2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-cainjector-dockercfg-5zrb2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.137652 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.138288 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.146986 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9"] Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.159241 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88pdc\" (UniqueName: \"kubernetes.io/projected/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-kube-api-access-88pdc\") pod \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.159322 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-utilities\") pod \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.159416 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-catalog-content\") pod \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\" (UID: \"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f\") " Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.159647 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4b1b654-56be-40f7-9051-3a9cd248d3fa-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2kvx9\" (UID: \"f4b1b654-56be-40f7-9051-3a9cd248d3fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.159708 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svk4w\" (UniqueName: 
\"kubernetes.io/projected/f4b1b654-56be-40f7-9051-3a9cd248d3fa-kube-api-access-svk4w\") pod \"cert-manager-cainjector-855d9ccff4-2kvx9\" (UID: \"f4b1b654-56be-40f7-9051-3a9cd248d3fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.160638 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-utilities" (OuterVolumeSpecName: "utilities") pod "b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" (UID: "b428bcd3-48cf-4376-a9f5-d16c9d98cc0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.165963 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-kube-api-access-88pdc" (OuterVolumeSpecName: "kube-api-access-88pdc") pod "b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" (UID: "b428bcd3-48cf-4376-a9f5-d16c9d98cc0f"). InnerVolumeSpecName "kube-api-access-88pdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.245383 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" (UID: "b428bcd3-48cf-4376-a9f5-d16c9d98cc0f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.262276 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4b1b654-56be-40f7-9051-3a9cd248d3fa-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2kvx9\" (UID: \"f4b1b654-56be-40f7-9051-3a9cd248d3fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.262517 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svk4w\" (UniqueName: \"kubernetes.io/projected/f4b1b654-56be-40f7-9051-3a9cd248d3fa-kube-api-access-svk4w\") pod \"cert-manager-cainjector-855d9ccff4-2kvx9\" (UID: \"f4b1b654-56be-40f7-9051-3a9cd248d3fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.263224 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88pdc\" (UniqueName: \"kubernetes.io/projected/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-kube-api-access-88pdc\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.263249 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.263263 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.310312 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f4b1b654-56be-40f7-9051-3a9cd248d3fa-bound-sa-token\") pod 
\"cert-manager-cainjector-855d9ccff4-2kvx9\" (UID: \"f4b1b654-56be-40f7-9051-3a9cd248d3fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.310499 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svk4w\" (UniqueName: \"kubernetes.io/projected/f4b1b654-56be-40f7-9051-3a9cd248d3fa-kube-api-access-svk4w\") pod \"cert-manager-cainjector-855d9ccff4-2kvx9\" (UID: \"f4b1b654-56be-40f7-9051-3a9cd248d3fa\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.531406 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-c72g6"] Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.992617 4704 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5zrb2" Jan 22 16:44:22 crc kubenswrapper[4704]: I0122 16:44:22.998170 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.089288 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92q6s" event={"ID":"b428bcd3-48cf-4376-a9f5-d16c9d98cc0f","Type":"ContainerDied","Data":"c57563fad482e747cf4338d4f98e09feae72a80f792d59f83ef7d7bb854173d3"} Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.089347 4704 scope.go:117] "RemoveContainer" containerID="92d3d17f6c7a4a02276f76600f5ac3957ae29a21af537fda6c6fdb3a44b04b5d" Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.089350 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92q6s" Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.091047 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" event={"ID":"a268e464-161c-413c-ac49-da3a0c827514","Type":"ContainerStarted","Data":"c5c8b7b582a131ea5d95fb75e1f776e95582b919c97501a262858b618b22a8e3"} Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.114311 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92q6s"] Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.115752 4704 scope.go:117] "RemoveContainer" containerID="e1216b0eee8f62e4c1a8b411af7bd8d50e21bb6d3f6f52f0b70c4241f5b7a880" Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.118510 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-92q6s"] Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.152166 4704 scope.go:117] "RemoveContainer" containerID="f76aaf346b8fd72a48c1bb5c91c450e0956e8f87d737d7d5fef87a2b1f56684c" Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.382155 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9"] Jan 22 16:44:23 crc kubenswrapper[4704]: I0122 16:44:23.647475 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" path="/var/lib/kubelet/pods/b428bcd3-48cf-4376-a9f5-d16c9d98cc0f/volumes" Jan 22 16:44:24 crc kubenswrapper[4704]: I0122 16:44:24.098217 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" event={"ID":"f4b1b654-56be-40f7-9051-3a9cd248d3fa","Type":"ContainerStarted","Data":"b8cd974676edde915d5014fc2257e0a7b9059b1ba26e6aa3163a3809529a5602"} Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.157588 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" event={"ID":"a268e464-161c-413c-ac49-da3a0c827514","Type":"ContainerStarted","Data":"be77a090da322c869a535a2551b2ecf84fec99956fda384aba1ab6806608b00b"} Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.158295 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.162517 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" event={"ID":"f4b1b654-56be-40f7-9051-3a9cd248d3fa","Type":"ContainerStarted","Data":"e57b4d0ae76da52674873a40ed76d42b6178a64caebbbaf892e546af3f4785d5"} Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.184038 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" podStartSLOduration=2.586040587 podStartE2EDuration="11.184009253s" podCreationTimestamp="2026-01-22 16:44:21 +0000 UTC" firstStartedPulling="2026-01-22 16:44:22.537102938 +0000 UTC m=+955.181649638" lastFinishedPulling="2026-01-22 16:44:31.135071604 +0000 UTC m=+963.779618304" observedRunningTime="2026-01-22 16:44:32.178476393 +0000 UTC m=+964.823023133" watchObservedRunningTime="2026-01-22 16:44:32.184009253 +0000 UTC m=+964.828555993" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.208392 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2kvx9" podStartSLOduration=2.448624992 podStartE2EDuration="10.20837238s" podCreationTimestamp="2026-01-22 16:44:22 +0000 UTC" firstStartedPulling="2026-01-22 16:44:23.39474954 +0000 UTC m=+956.039296240" lastFinishedPulling="2026-01-22 16:44:31.154496908 +0000 UTC m=+963.799043628" observedRunningTime="2026-01-22 16:44:32.204672512 +0000 UTC m=+964.849219223" watchObservedRunningTime="2026-01-22 16:44:32.20837238 +0000 UTC m=+964.852919080" Jan 
22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.255718 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-6zj75"] Jan 22 16:44:32 crc kubenswrapper[4704]: E0122 16:44:32.256106 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="registry-server" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.256125 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="registry-server" Jan 22 16:44:32 crc kubenswrapper[4704]: E0122 16:44:32.256145 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="extract-utilities" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.256154 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="extract-utilities" Jan 22 16:44:32 crc kubenswrapper[4704]: E0122 16:44:32.256170 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="extract-content" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.256178 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="extract-content" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.256321 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="b428bcd3-48cf-4376-a9f5-d16c9d98cc0f" containerName="registry-server" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.256895 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.261336 4704 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rfs7g" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.263836 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-6zj75"] Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.346070 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/571d56d1-f2fc-41ab-aff3-d5ae31849f8e-bound-sa-token\") pod \"cert-manager-86cb77c54b-6zj75\" (UID: \"571d56d1-f2fc-41ab-aff3-d5ae31849f8e\") " pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.346151 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbhs\" (UniqueName: \"kubernetes.io/projected/571d56d1-f2fc-41ab-aff3-d5ae31849f8e-kube-api-access-nsbhs\") pod \"cert-manager-86cb77c54b-6zj75\" (UID: \"571d56d1-f2fc-41ab-aff3-d5ae31849f8e\") " pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.447028 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsbhs\" (UniqueName: \"kubernetes.io/projected/571d56d1-f2fc-41ab-aff3-d5ae31849f8e-kube-api-access-nsbhs\") pod \"cert-manager-86cb77c54b-6zj75\" (UID: \"571d56d1-f2fc-41ab-aff3-d5ae31849f8e\") " pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.447144 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/571d56d1-f2fc-41ab-aff3-d5ae31849f8e-bound-sa-token\") pod \"cert-manager-86cb77c54b-6zj75\" (UID: 
\"571d56d1-f2fc-41ab-aff3-d5ae31849f8e\") " pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.468211 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsbhs\" (UniqueName: \"kubernetes.io/projected/571d56d1-f2fc-41ab-aff3-d5ae31849f8e-kube-api-access-nsbhs\") pod \"cert-manager-86cb77c54b-6zj75\" (UID: \"571d56d1-f2fc-41ab-aff3-d5ae31849f8e\") " pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.478229 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/571d56d1-f2fc-41ab-aff3-d5ae31849f8e-bound-sa-token\") pod \"cert-manager-86cb77c54b-6zj75\" (UID: \"571d56d1-f2fc-41ab-aff3-d5ae31849f8e\") " pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:32 crc kubenswrapper[4704]: I0122 16:44:32.574384 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-6zj75" Jan 22 16:44:33 crc kubenswrapper[4704]: I0122 16:44:33.034365 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-6zj75"] Jan 22 16:44:33 crc kubenswrapper[4704]: I0122 16:44:33.169293 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-6zj75" event={"ID":"571d56d1-f2fc-41ab-aff3-d5ae31849f8e","Type":"ContainerStarted","Data":"e88733f10265b633d821e9341a585d79940a13716fabc3c9782a65f2b5e692a4"} Jan 22 16:44:33 crc kubenswrapper[4704]: I0122 16:44:33.169658 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-6zj75" event={"ID":"571d56d1-f2fc-41ab-aff3-d5ae31849f8e","Type":"ContainerStarted","Data":"b33ef1340adc5963f1de19f851742719718e6f72082d093702653bea821b107d"} Jan 22 16:44:33 crc kubenswrapper[4704]: I0122 16:44:33.181953 4704 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-86cb77c54b-6zj75" podStartSLOduration=1.181930962 podStartE2EDuration="1.181930962s" podCreationTimestamp="2026-01-22 16:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:44:33.180606913 +0000 UTC m=+965.825153623" watchObservedRunningTime="2026-01-22 16:44:33.181930962 +0000 UTC m=+965.826477662" Jan 22 16:44:37 crc kubenswrapper[4704]: I0122 16:44:37.141831 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-c72g6" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.351164 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mzkrc"] Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.352935 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.354772 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zwb4j" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.355235 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.355477 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.373983 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mzkrc"] Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.450361 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk254\" (UniqueName: 
\"kubernetes.io/projected/3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914-kube-api-access-zk254\") pod \"openstack-operator-index-mzkrc\" (UID: \"3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914\") " pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.551543 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk254\" (UniqueName: \"kubernetes.io/projected/3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914-kube-api-access-zk254\") pod \"openstack-operator-index-mzkrc\" (UID: \"3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914\") " pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.574616 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk254\" (UniqueName: \"kubernetes.io/projected/3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914-kube-api-access-zk254\") pod \"openstack-operator-index-mzkrc\" (UID: \"3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914\") " pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:40 crc kubenswrapper[4704]: I0122 16:44:40.714016 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:41 crc kubenswrapper[4704]: I0122 16:44:41.158240 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mzkrc"] Jan 22 16:44:41 crc kubenswrapper[4704]: I0122 16:44:41.222732 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzkrc" event={"ID":"3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914","Type":"ContainerStarted","Data":"e28dc04d97150996664285c1f0b7f532f831e307822519abcb008dae3cda047e"} Jan 22 16:44:45 crc kubenswrapper[4704]: I0122 16:44:45.255400 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzkrc" event={"ID":"3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914","Type":"ContainerStarted","Data":"b474528f318bc25cfeaa872ae8f3976bfcc3336db86dee2f5d0e33e6792a5014"} Jan 22 16:44:45 crc kubenswrapper[4704]: I0122 16:44:45.267579 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mzkrc" podStartSLOduration=1.6662499830000002 podStartE2EDuration="5.267557998s" podCreationTimestamp="2026-01-22 16:44:40 +0000 UTC" firstStartedPulling="2026-01-22 16:44:41.167679844 +0000 UTC m=+973.812226564" lastFinishedPulling="2026-01-22 16:44:44.768987879 +0000 UTC m=+977.413534579" observedRunningTime="2026-01-22 16:44:45.265899849 +0000 UTC m=+977.910446539" watchObservedRunningTime="2026-01-22 16:44:45.267557998 +0000 UTC m=+977.912104718" Jan 22 16:44:50 crc kubenswrapper[4704]: I0122 16:44:50.714153 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:50 crc kubenswrapper[4704]: I0122 16:44:50.714238 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:50 crc kubenswrapper[4704]: I0122 
16:44:50.742757 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:51 crc kubenswrapper[4704]: I0122 16:44:51.322715 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-mzkrc" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.577071 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn"] Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.580426 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.582540 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-h9zdt" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.585198 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn"] Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.728856 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-util\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.728964 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpsbr\" (UniqueName: \"kubernetes.io/projected/257eeca9-b568-4dba-8647-c37428c6f7b9-kube-api-access-dpsbr\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: 
\"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.729158 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-bundle\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.830564 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-bundle\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.830691 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-util\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.830776 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpsbr\" (UniqueName: \"kubernetes.io/projected/257eeca9-b568-4dba-8647-c37428c6f7b9-kube-api-access-dpsbr\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 
16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.831157 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-bundle\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.831435 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-util\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.854485 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpsbr\" (UniqueName: \"kubernetes.io/projected/257eeca9-b568-4dba-8647-c37428c6f7b9-kube-api-access-dpsbr\") pod \"e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:56 crc kubenswrapper[4704]: I0122 16:44:56.945456 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:44:57 crc kubenswrapper[4704]: I0122 16:44:57.376255 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn"] Jan 22 16:44:58 crc kubenswrapper[4704]: I0122 16:44:58.352165 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" event={"ID":"257eeca9-b568-4dba-8647-c37428c6f7b9","Type":"ContainerStarted","Data":"28b3ac81533a714e8d6319da1890eb33d6ffeca6982948ad8e7e4d77ccc5ce0f"} Jan 22 16:44:59 crc kubenswrapper[4704]: I0122 16:44:59.361671 4704 generic.go:334] "Generic (PLEG): container finished" podID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerID="6d1c687589f175fa7f9d41859a0f2b47823b56b5099bcd5b1f66bac9318606c9" exitCode=0 Jan 22 16:44:59 crc kubenswrapper[4704]: I0122 16:44:59.361709 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" event={"ID":"257eeca9-b568-4dba-8647-c37428c6f7b9","Type":"ContainerDied","Data":"6d1c687589f175fa7f9d41859a0f2b47823b56b5099bcd5b1f66bac9318606c9"} Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.165447 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg"] Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.167094 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.169603 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.169646 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.179330 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg"] Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.279831 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5506271-a913-4b03-8cec-ba92cbb1e462-secret-volume\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.280069 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5506271-a913-4b03-8cec-ba92cbb1e462-config-volume\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.280186 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4vh\" (UniqueName: \"kubernetes.io/projected/f5506271-a913-4b03-8cec-ba92cbb1e462-kube-api-access-kb4vh\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.369871 4704 generic.go:334] "Generic (PLEG): container finished" podID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerID="f0596e3cb6204ab1552569f7d344228fad4a688e5a5ca6207a5b4ee1d2df62c4" exitCode=0 Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.369923 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" event={"ID":"257eeca9-b568-4dba-8647-c37428c6f7b9","Type":"ContainerDied","Data":"f0596e3cb6204ab1552569f7d344228fad4a688e5a5ca6207a5b4ee1d2df62c4"} Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.382252 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5506271-a913-4b03-8cec-ba92cbb1e462-config-volume\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.382335 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb4vh\" (UniqueName: \"kubernetes.io/projected/f5506271-a913-4b03-8cec-ba92cbb1e462-kube-api-access-kb4vh\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.382421 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5506271-a913-4b03-8cec-ba92cbb1e462-secret-volume\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc 
kubenswrapper[4704]: I0122 16:45:00.384498 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5506271-a913-4b03-8cec-ba92cbb1e462-config-volume\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.393400 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5506271-a913-4b03-8cec-ba92cbb1e462-secret-volume\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.402978 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb4vh\" (UniqueName: \"kubernetes.io/projected/f5506271-a913-4b03-8cec-ba92cbb1e462-kube-api-access-kb4vh\") pod \"collect-profiles-29485005-pbbwg\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.509141 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:00 crc kubenswrapper[4704]: I0122 16:45:00.692081 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg"] Jan 22 16:45:00 crc kubenswrapper[4704]: W0122 16:45:00.697273 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5506271_a913_4b03_8cec_ba92cbb1e462.slice/crio-6e62a49879959e62bbe072c1b07325f0ceb4f744e22db990d59e92d26a2602fa WatchSource:0}: Error finding container 6e62a49879959e62bbe072c1b07325f0ceb4f744e22db990d59e92d26a2602fa: Status 404 returned error can't find the container with id 6e62a49879959e62bbe072c1b07325f0ceb4f744e22db990d59e92d26a2602fa Jan 22 16:45:01 crc kubenswrapper[4704]: I0122 16:45:01.382040 4704 generic.go:334] "Generic (PLEG): container finished" podID="f5506271-a913-4b03-8cec-ba92cbb1e462" containerID="223cd6f978a54b7b47f37433c36de9534d6590a8f743f3ed472feb22c83e6de8" exitCode=0 Jan 22 16:45:01 crc kubenswrapper[4704]: I0122 16:45:01.382143 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" event={"ID":"f5506271-a913-4b03-8cec-ba92cbb1e462","Type":"ContainerDied","Data":"223cd6f978a54b7b47f37433c36de9534d6590a8f743f3ed472feb22c83e6de8"} Jan 22 16:45:01 crc kubenswrapper[4704]: I0122 16:45:01.382371 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" event={"ID":"f5506271-a913-4b03-8cec-ba92cbb1e462","Type":"ContainerStarted","Data":"6e62a49879959e62bbe072c1b07325f0ceb4f744e22db990d59e92d26a2602fa"} Jan 22 16:45:01 crc kubenswrapper[4704]: I0122 16:45:01.386908 4704 generic.go:334] "Generic (PLEG): container finished" podID="257eeca9-b568-4dba-8647-c37428c6f7b9" 
containerID="dcd177f7a1e9445dd5942c38bfc2fd412c97d7da658c85ebc11e5096431a93d9" exitCode=0 Jan 22 16:45:01 crc kubenswrapper[4704]: I0122 16:45:01.386951 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" event={"ID":"257eeca9-b568-4dba-8647-c37428c6f7b9","Type":"ContainerDied","Data":"dcd177f7a1e9445dd5942c38bfc2fd412c97d7da658c85ebc11e5096431a93d9"} Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.771646 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.779153 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.927517 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5506271-a913-4b03-8cec-ba92cbb1e462-config-volume\") pod \"f5506271-a913-4b03-8cec-ba92cbb1e462\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.927610 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-bundle\") pod \"257eeca9-b568-4dba-8647-c37428c6f7b9\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.927685 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-util\") pod \"257eeca9-b568-4dba-8647-c37428c6f7b9\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.927743 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5506271-a913-4b03-8cec-ba92cbb1e462-secret-volume\") pod \"f5506271-a913-4b03-8cec-ba92cbb1e462\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.927770 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpsbr\" (UniqueName: \"kubernetes.io/projected/257eeca9-b568-4dba-8647-c37428c6f7b9-kube-api-access-dpsbr\") pod \"257eeca9-b568-4dba-8647-c37428c6f7b9\" (UID: \"257eeca9-b568-4dba-8647-c37428c6f7b9\") " Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.927822 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4vh\" (UniqueName: \"kubernetes.io/projected/f5506271-a913-4b03-8cec-ba92cbb1e462-kube-api-access-kb4vh\") pod \"f5506271-a913-4b03-8cec-ba92cbb1e462\" (UID: \"f5506271-a913-4b03-8cec-ba92cbb1e462\") " Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.928338 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5506271-a913-4b03-8cec-ba92cbb1e462-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5506271-a913-4b03-8cec-ba92cbb1e462" (UID: "f5506271-a913-4b03-8cec-ba92cbb1e462"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.928678 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-bundle" (OuterVolumeSpecName: "bundle") pod "257eeca9-b568-4dba-8647-c37428c6f7b9" (UID: "257eeca9-b568-4dba-8647-c37428c6f7b9"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.928916 4704 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5506271-a913-4b03-8cec-ba92cbb1e462-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.928944 4704 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:02 crc kubenswrapper[4704]: I0122 16:45:02.967540 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-util" (OuterVolumeSpecName: "util") pod "257eeca9-b568-4dba-8647-c37428c6f7b9" (UID: "257eeca9-b568-4dba-8647-c37428c6f7b9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.029966 4704 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/257eeca9-b568-4dba-8647-c37428c6f7b9-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.390452 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5506271-a913-4b03-8cec-ba92cbb1e462-kube-api-access-kb4vh" (OuterVolumeSpecName: "kube-api-access-kb4vh") pod "f5506271-a913-4b03-8cec-ba92cbb1e462" (UID: "f5506271-a913-4b03-8cec-ba92cbb1e462"). InnerVolumeSpecName "kube-api-access-kb4vh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.390481 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257eeca9-b568-4dba-8647-c37428c6f7b9-kube-api-access-dpsbr" (OuterVolumeSpecName: "kube-api-access-dpsbr") pod "257eeca9-b568-4dba-8647-c37428c6f7b9" (UID: "257eeca9-b568-4dba-8647-c37428c6f7b9"). InnerVolumeSpecName "kube-api-access-dpsbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.393523 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5506271-a913-4b03-8cec-ba92cbb1e462-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5506271-a913-4b03-8cec-ba92cbb1e462" (UID: "f5506271-a913-4b03-8cec-ba92cbb1e462"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.408873 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" event={"ID":"257eeca9-b568-4dba-8647-c37428c6f7b9","Type":"ContainerDied","Data":"28b3ac81533a714e8d6319da1890eb33d6ffeca6982948ad8e7e4d77ccc5ce0f"} Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.408907 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.408938 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28b3ac81533a714e8d6319da1890eb33d6ffeca6982948ad8e7e4d77ccc5ce0f" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.410750 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" event={"ID":"f5506271-a913-4b03-8cec-ba92cbb1e462","Type":"ContainerDied","Data":"6e62a49879959e62bbe072c1b07325f0ceb4f744e22db990d59e92d26a2602fa"} Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.410781 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e62a49879959e62bbe072c1b07325f0ceb4f744e22db990d59e92d26a2602fa" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.410827 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485005-pbbwg" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.436300 4704 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5506271-a913-4b03-8cec-ba92cbb1e462-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.436341 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpsbr\" (UniqueName: \"kubernetes.io/projected/257eeca9-b568-4dba-8647-c37428c6f7b9-kube-api-access-dpsbr\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:03 crc kubenswrapper[4704]: I0122 16:45:03.436356 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb4vh\" (UniqueName: \"kubernetes.io/projected/f5506271-a913-4b03-8cec-ba92cbb1e462-kube-api-access-kb4vh\") on node \"crc\" DevicePath \"\"" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.955511 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq"] Jan 22 16:45:09 crc kubenswrapper[4704]: E0122 16:45:09.956438 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerName="pull" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.956456 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerName="pull" Jan 22 16:45:09 crc kubenswrapper[4704]: E0122 16:45:09.956472 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerName="util" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.956479 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerName="util" Jan 22 16:45:09 crc kubenswrapper[4704]: E0122 16:45:09.956496 4704 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5506271-a913-4b03-8cec-ba92cbb1e462" containerName="collect-profiles" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.956504 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5506271-a913-4b03-8cec-ba92cbb1e462" containerName="collect-profiles" Jan 22 16:45:09 crc kubenswrapper[4704]: E0122 16:45:09.956522 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerName="extract" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.956529 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerName="extract" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.957176 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5506271-a913-4b03-8cec-ba92cbb1e462" containerName="collect-profiles" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.957210 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="257eeca9-b568-4dba-8647-c37428c6f7b9" containerName="extract" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.958465 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.962406 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-fl4qx" Jan 22 16:45:09 crc kubenswrapper[4704]: I0122 16:45:09.996214 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq"] Jan 22 16:45:10 crc kubenswrapper[4704]: I0122 16:45:10.134091 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d47x\" (UniqueName: \"kubernetes.io/projected/59835123-6708-4c93-96da-82bcddc141c7-kube-api-access-2d47x\") pod \"openstack-operator-controller-init-b7565899b-9x9fq\" (UID: \"59835123-6708-4c93-96da-82bcddc141c7\") " pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:45:10 crc kubenswrapper[4704]: I0122 16:45:10.236051 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d47x\" (UniqueName: \"kubernetes.io/projected/59835123-6708-4c93-96da-82bcddc141c7-kube-api-access-2d47x\") pod \"openstack-operator-controller-init-b7565899b-9x9fq\" (UID: \"59835123-6708-4c93-96da-82bcddc141c7\") " pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:45:10 crc kubenswrapper[4704]: I0122 16:45:10.267419 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d47x\" (UniqueName: \"kubernetes.io/projected/59835123-6708-4c93-96da-82bcddc141c7-kube-api-access-2d47x\") pod \"openstack-operator-controller-init-b7565899b-9x9fq\" (UID: \"59835123-6708-4c93-96da-82bcddc141c7\") " pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:45:10 crc kubenswrapper[4704]: I0122 16:45:10.292816 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:45:10 crc kubenswrapper[4704]: I0122 16:45:10.598560 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq"] Jan 22 16:45:11 crc kubenswrapper[4704]: I0122 16:45:11.476234 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" event={"ID":"59835123-6708-4c93-96da-82bcddc141c7","Type":"ContainerStarted","Data":"c25df24cac52407aa5842fb92a02ee6636d7e059dfb84d5ee0cf9213a9170979"} Jan 22 16:45:16 crc kubenswrapper[4704]: I0122 16:45:16.516304 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" event={"ID":"59835123-6708-4c93-96da-82bcddc141c7","Type":"ContainerStarted","Data":"c7cdc49cdd3619c24c3c2ccf564c13646c7cbd27482001fcedfbbe4b76e98fed"} Jan 22 16:45:16 crc kubenswrapper[4704]: I0122 16:45:16.516919 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:45:16 crc kubenswrapper[4704]: I0122 16:45:16.545420 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" podStartSLOduration=2.2706141300000002 podStartE2EDuration="7.545398405s" podCreationTimestamp="2026-01-22 16:45:09 +0000 UTC" firstStartedPulling="2026-01-22 16:45:10.610621851 +0000 UTC m=+1003.255168551" lastFinishedPulling="2026-01-22 16:45:15.885406116 +0000 UTC m=+1008.529952826" observedRunningTime="2026-01-22 16:45:16.541480322 +0000 UTC m=+1009.186027012" watchObservedRunningTime="2026-01-22 16:45:16.545398405 +0000 UTC m=+1009.189945105" Jan 22 16:45:30 crc kubenswrapper[4704]: I0122 16:45:30.296531 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.086959 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.087694 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.725959 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.726924 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.731155 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.731871 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.737440 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7rfhw" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.737440 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kcj29" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.802553 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hczt\" (UniqueName: \"kubernetes.io/projected/068092e4-bd7d-4f6f-8806-b794f3dbf696-kube-api-access-4hczt\") pod \"cinder-operator-controller-manager-69cf5d4557-hd8tx\" (UID: \"068092e4-bd7d-4f6f-8806-b794f3dbf696\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.802593 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgjc\" (UniqueName: \"kubernetes.io/projected/170b5a59-8ffd-47a8-b2b9-a0f48167050d-kube-api-access-rkgjc\") pod \"barbican-operator-controller-manager-59dd8b7cbf-g4q7s\" (UID: \"170b5a59-8ffd-47a8-b2b9-a0f48167050d\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.808462 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.811700 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.812828 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.817517 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-lsqmw" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.820055 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.823627 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.824706 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.827551 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.828561 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.829110 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ckfmj" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.830697 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cf8md" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.833482 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.840030 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.861782 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.867994 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.873933 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tdq55" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.891021 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.891897 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.894297 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mgc6x" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.897490 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.903535 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hczt\" (UniqueName: \"kubernetes.io/projected/068092e4-bd7d-4f6f-8806-b794f3dbf696-kube-api-access-4hczt\") pod \"cinder-operator-controller-manager-69cf5d4557-hd8tx\" (UID: \"068092e4-bd7d-4f6f-8806-b794f3dbf696\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.904083 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgjc\" (UniqueName: \"kubernetes.io/projected/170b5a59-8ffd-47a8-b2b9-a0f48167050d-kube-api-access-rkgjc\") pod \"barbican-operator-controller-manager-59dd8b7cbf-g4q7s\" (UID: \"170b5a59-8ffd-47a8-b2b9-a0f48167050d\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.948488 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.948541 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgjc\" (UniqueName: \"kubernetes.io/projected/170b5a59-8ffd-47a8-b2b9-a0f48167050d-kube-api-access-rkgjc\") pod \"barbican-operator-controller-manager-59dd8b7cbf-g4q7s\" (UID: \"170b5a59-8ffd-47a8-b2b9-a0f48167050d\") " 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.955760 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hczt\" (UniqueName: \"kubernetes.io/projected/068092e4-bd7d-4f6f-8806-b794f3dbf696-kube-api-access-4hczt\") pod \"cinder-operator-controller-manager-69cf5d4557-hd8tx\" (UID: \"068092e4-bd7d-4f6f-8806-b794f3dbf696\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.973865 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.992158 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz"] Jan 22 16:45:49 crc kubenswrapper[4704]: I0122 16:45:49.993270 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.012938 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4gl2r" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.015130 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7tv6\" (UniqueName: \"kubernetes.io/projected/f166ae0f-3591-4099-bd69-62ec09ba977a-kube-api-access-p7tv6\") pod \"horizon-operator-controller-manager-77d5c5b54f-5w58r\" (UID: \"f166ae0f-3591-4099-bd69-62ec09ba977a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.015185 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4n55\" (UniqueName: \"kubernetes.io/projected/dd19d6a3-d166-41b8-ac16-76d87c51cad5-kube-api-access-r4n55\") pod \"ironic-operator-controller-manager-69d6c9f5b8-nggqz\" (UID: \"dd19d6a3-d166-41b8-ac16-76d87c51cad5\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.015219 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrj5\" (UniqueName: \"kubernetes.io/projected/50d3d899-4725-4b05-8dc8-84152766e963-kube-api-access-jnrj5\") pod \"designate-operator-controller-manager-b45d7bf98-4hzcj\" (UID: \"50d3d899-4725-4b05-8dc8-84152766e963\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.015277 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzfkd\" (UniqueName: 
\"kubernetes.io/projected/3c79bdf7-d523-40e2-8539-f28025e1a92f-kube-api-access-rzfkd\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.015310 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.015336 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25sl\" (UniqueName: \"kubernetes.io/projected/7974da72-060f-48cb-b06e-7fae3ecd377d-kube-api-access-h25sl\") pod \"heat-operator-controller-manager-594c8c9d5d-ggdqg\" (UID: \"7974da72-060f-48cb-b06e-7fae3ecd377d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.015366 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnmk4\" (UniqueName: \"kubernetes.io/projected/17fe4464-7b64-4efe-b95b-89834259fc79-kube-api-access-pnmk4\") pod \"glance-operator-controller-manager-78fdd796fd-nm6c8\" (UID: \"17fe4464-7b64-4efe-b95b-89834259fc79\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.036861 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.037996 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.043254 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-d5d4q" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.052134 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.057343 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.104916 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.117206 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4n55\" (UniqueName: \"kubernetes.io/projected/dd19d6a3-d166-41b8-ac16-76d87c51cad5-kube-api-access-r4n55\") pod \"ironic-operator-controller-manager-69d6c9f5b8-nggqz\" (UID: \"dd19d6a3-d166-41b8-ac16-76d87c51cad5\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.117249 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrj5\" (UniqueName: \"kubernetes.io/projected/50d3d899-4725-4b05-8dc8-84152766e963-kube-api-access-jnrj5\") pod \"designate-operator-controller-manager-b45d7bf98-4hzcj\" (UID: \"50d3d899-4725-4b05-8dc8-84152766e963\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.117298 4704 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rzfkd\" (UniqueName: \"kubernetes.io/projected/3c79bdf7-d523-40e2-8539-f28025e1a92f-kube-api-access-rzfkd\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.117332 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.117353 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25sl\" (UniqueName: \"kubernetes.io/projected/7974da72-060f-48cb-b06e-7fae3ecd377d-kube-api-access-h25sl\") pod \"heat-operator-controller-manager-594c8c9d5d-ggdqg\" (UID: \"7974da72-060f-48cb-b06e-7fae3ecd377d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.117376 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnmk4\" (UniqueName: \"kubernetes.io/projected/17fe4464-7b64-4efe-b95b-89834259fc79-kube-api-access-pnmk4\") pod \"glance-operator-controller-manager-78fdd796fd-nm6c8\" (UID: \"17fe4464-7b64-4efe-b95b-89834259fc79\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.117413 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7tv6\" (UniqueName: \"kubernetes.io/projected/f166ae0f-3591-4099-bd69-62ec09ba977a-kube-api-access-p7tv6\") pod \"horizon-operator-controller-manager-77d5c5b54f-5w58r\" (UID: 
\"f166ae0f-3591-4099-bd69-62ec09ba977a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.118441 4704 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.118480 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert podName:3c79bdf7-d523-40e2-8539-f28025e1a92f nodeName:}" failed. No retries permitted until 2026-01-22 16:45:50.618466916 +0000 UTC m=+1043.263013616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert") pod "infra-operator-controller-manager-54ccf4f85d-77kz5" (UID: "3c79bdf7-d523-40e2-8539-f28025e1a92f") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.141507 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7tv6\" (UniqueName: \"kubernetes.io/projected/f166ae0f-3591-4099-bd69-62ec09ba977a-kube-api-access-p7tv6\") pod \"horizon-operator-controller-manager-77d5c5b54f-5w58r\" (UID: \"f166ae0f-3591-4099-bd69-62ec09ba977a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.148895 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.186214 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzfkd\" (UniqueName: \"kubernetes.io/projected/3c79bdf7-d523-40e2-8539-f28025e1a92f-kube-api-access-rzfkd\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: 
\"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.186841 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4n55\" (UniqueName: \"kubernetes.io/projected/dd19d6a3-d166-41b8-ac16-76d87c51cad5-kube-api-access-r4n55\") pod \"ironic-operator-controller-manager-69d6c9f5b8-nggqz\" (UID: \"dd19d6a3-d166-41b8-ac16-76d87c51cad5\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.187460 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25sl\" (UniqueName: \"kubernetes.io/projected/7974da72-060f-48cb-b06e-7fae3ecd377d-kube-api-access-h25sl\") pod \"heat-operator-controller-manager-594c8c9d5d-ggdqg\" (UID: \"7974da72-060f-48cb-b06e-7fae3ecd377d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.199744 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrj5\" (UniqueName: \"kubernetes.io/projected/50d3d899-4725-4b05-8dc8-84152766e963-kube-api-access-jnrj5\") pod \"designate-operator-controller-manager-b45d7bf98-4hzcj\" (UID: \"50d3d899-4725-4b05-8dc8-84152766e963\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.199975 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.200393 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnmk4\" (UniqueName: \"kubernetes.io/projected/17fe4464-7b64-4efe-b95b-89834259fc79-kube-api-access-pnmk4\") pod \"glance-operator-controller-manager-78fdd796fd-nm6c8\" (UID: \"17fe4464-7b64-4efe-b95b-89834259fc79\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.207967 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.212668 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.229146 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4h9\" (UniqueName: \"kubernetes.io/projected/b5539e8b-5116-4c16-9b27-6b5958450759-kube-api-access-nr4h9\") pod \"keystone-operator-controller-manager-b8b6d4659-b6xnp\" (UID: \"b5539e8b-5116-4c16-9b27-6b5958450759\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.231424 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.241329 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-m8psm" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.298134 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.312102 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.324448 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.329982 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.331191 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4h9\" (UniqueName: \"kubernetes.io/projected/b5539e8b-5116-4c16-9b27-6b5958450759-kube-api-access-nr4h9\") pod \"keystone-operator-controller-manager-b8b6d4659-b6xnp\" (UID: \"b5539e8b-5116-4c16-9b27-6b5958450759\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.331221 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgrgt\" (UniqueName: \"kubernetes.io/projected/8be9d1b7-ad8a-41b0-a578-e26baafcf932-kube-api-access-cgrgt\") pod \"manila-operator-controller-manager-78c6999f6f-7jps5\" (UID: \"8be9d1b7-ad8a-41b0-a578-e26baafcf932\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.334696 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rds5l" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.346500 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.356495 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.364053 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr4h9\" (UniqueName: \"kubernetes.io/projected/b5539e8b-5116-4c16-9b27-6b5958450759-kube-api-access-nr4h9\") pod \"keystone-operator-controller-manager-b8b6d4659-b6xnp\" (UID: \"b5539e8b-5116-4c16-9b27-6b5958450759\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.369274 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.370164 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.372781 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-svks9" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.377717 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.408966 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.425811 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.427423 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.430192 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-pf2k4" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.431686 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.432117 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.433175 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.434183 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ds9\" (UniqueName: \"kubernetes.io/projected/48e30eae-1a73-45ab-8ce9-0e64d820d7d6-kube-api-access-m2ds9\") pod \"nova-operator-controller-manager-6b8bc8d87d-s59f7\" (UID: \"48e30eae-1a73-45ab-8ce9-0e64d820d7d6\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.434234 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9sl\" (UniqueName: \"kubernetes.io/projected/9cbde52d-972f-41dc-b9b0-6cd275d013a8-kube-api-access-bk9sl\") pod \"neutron-operator-controller-manager-5d8f59fb49-rkxpv\" (UID: \"9cbde52d-972f-41dc-b9b0-6cd275d013a8\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.434267 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppgvr\" (UniqueName: \"kubernetes.io/projected/52786693-8d66-4a9d-aff2-b6d4b7c260be-kube-api-access-ppgvr\") pod \"mariadb-operator-controller-manager-c87fff755-txdkv\" (UID: \"52786693-8d66-4a9d-aff2-b6d4b7c260be\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.434328 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgrgt\" (UniqueName: \"kubernetes.io/projected/8be9d1b7-ad8a-41b0-a578-e26baafcf932-kube-api-access-cgrgt\") pod \"manila-operator-controller-manager-78c6999f6f-7jps5\" (UID: \"8be9d1b7-ad8a-41b0-a578-e26baafcf932\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" Jan 
22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.440291 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-c8h4k" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.458904 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.473483 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.478445 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgrgt\" (UniqueName: \"kubernetes.io/projected/8be9d1b7-ad8a-41b0-a578-e26baafcf932-kube-api-access-cgrgt\") pod \"manila-operator-controller-manager-78c6999f6f-7jps5\" (UID: \"8be9d1b7-ad8a-41b0-a578-e26baafcf932\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.490773 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.495633 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.496582 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.500690 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.506338 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.507457 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.510014 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wwjgm" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.510146 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.513653 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-npcc8" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.522864 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.523984 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.527088 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7vcfd" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.529390 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.530483 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.533401 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-n4fnv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.540754 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrrww\" (UniqueName: \"kubernetes.io/projected/8ab35638-b730-42d8-ab86-d7573f3b5083-kube-api-access-lrrww\") pod \"octavia-operator-controller-manager-7bd9774b6-pmcms\" (UID: \"8ab35638-b730-42d8-ab86-d7573f3b5083\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.540827 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ds9\" (UniqueName: \"kubernetes.io/projected/48e30eae-1a73-45ab-8ce9-0e64d820d7d6-kube-api-access-m2ds9\") pod \"nova-operator-controller-manager-6b8bc8d87d-s59f7\" (UID: \"48e30eae-1a73-45ab-8ce9-0e64d820d7d6\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.541043 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7g94\" (UniqueName: \"kubernetes.io/projected/d7747ccf-7f71-46a7-86b2-782561d8c41c-kube-api-access-w7g94\") pod \"ovn-operator-controller-manager-55db956ddc-2ntql\" (UID: \"d7747ccf-7f71-46a7-86b2-782561d8c41c\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.541071 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ggc\" (UniqueName: \"kubernetes.io/projected/36ac804d-cc67-4975-9b4d-6ccaed33f8e9-kube-api-access-m2ggc\") pod \"placement-operator-controller-manager-5d646b7d76-w2xzp\" (UID: \"36ac804d-cc67-4975-9b4d-6ccaed33f8e9\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.541098 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9sl\" (UniqueName: \"kubernetes.io/projected/9cbde52d-972f-41dc-b9b0-6cd275d013a8-kube-api-access-bk9sl\") pod \"neutron-operator-controller-manager-5d8f59fb49-rkxpv\" (UID: \"9cbde52d-972f-41dc-b9b0-6cd275d013a8\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.541133 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppgvr\" (UniqueName: \"kubernetes.io/projected/52786693-8d66-4a9d-aff2-b6d4b7c260be-kube-api-access-ppgvr\") pod \"mariadb-operator-controller-manager-c87fff755-txdkv\" (UID: \"52786693-8d66-4a9d-aff2-b6d4b7c260be\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.541183 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.541224 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fq52\" (UniqueName: \"kubernetes.io/projected/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-kube-api-access-7fq52\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.556157 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.557297 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.570751 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-lmpdk" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.575218 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ds9\" (UniqueName: \"kubernetes.io/projected/48e30eae-1a73-45ab-8ce9-0e64d820d7d6-kube-api-access-m2ds9\") pod \"nova-operator-controller-manager-6b8bc8d87d-s59f7\" (UID: \"48e30eae-1a73-45ab-8ce9-0e64d820d7d6\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.578891 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.585356 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.588393 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9sl\" (UniqueName: \"kubernetes.io/projected/9cbde52d-972f-41dc-b9b0-6cd275d013a8-kube-api-access-bk9sl\") pod \"neutron-operator-controller-manager-5d8f59fb49-rkxpv\" (UID: \"9cbde52d-972f-41dc-b9b0-6cd275d013a8\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.589249 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.593407 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppgvr\" (UniqueName: 
\"kubernetes.io/projected/52786693-8d66-4a9d-aff2-b6d4b7c260be-kube-api-access-ppgvr\") pod \"mariadb-operator-controller-manager-c87fff755-txdkv\" (UID: \"52786693-8d66-4a9d-aff2-b6d4b7c260be\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.595968 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.600196 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.601688 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.605505 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8fx5g" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.605875 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.638884 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.644595 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647455 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrrww\" (UniqueName: \"kubernetes.io/projected/8ab35638-b730-42d8-ab86-d7573f3b5083-kube-api-access-lrrww\") pod \"octavia-operator-controller-manager-7bd9774b6-pmcms\" (UID: \"8ab35638-b730-42d8-ab86-d7573f3b5083\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647534 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7g94\" (UniqueName: \"kubernetes.io/projected/d7747ccf-7f71-46a7-86b2-782561d8c41c-kube-api-access-w7g94\") pod \"ovn-operator-controller-manager-55db956ddc-2ntql\" (UID: \"d7747ccf-7f71-46a7-86b2-782561d8c41c\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647561 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ggc\" (UniqueName: \"kubernetes.io/projected/36ac804d-cc67-4975-9b4d-6ccaed33f8e9-kube-api-access-m2ggc\") pod \"placement-operator-controller-manager-5d646b7d76-w2xzp\" (UID: \"36ac804d-cc67-4975-9b4d-6ccaed33f8e9\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647578 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647653 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xdzs\" (UniqueName: \"kubernetes.io/projected/361a820d-5d68-41d8-834e-8faf6862ac00-kube-api-access-7xdzs\") pod \"swift-operator-controller-manager-547cbdb99f-h2sh7\" (UID: \"361a820d-5d68-41d8-834e-8faf6862ac00\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647728 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647859 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fq52\" (UniqueName: \"kubernetes.io/projected/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-kube-api-access-7fq52\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647905 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdbb\" (UniqueName: \"kubernetes.io/projected/115e9b6d-342e-4161-80a7-fd6786dd97ab-kube-api-access-stdbb\") pod \"test-operator-controller-manager-69797bbcbd-sc4sv\" (UID: \"115e9b6d-342e-4161-80a7-fd6786dd97ab\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" Jan 22 16:45:50 
crc kubenswrapper[4704]: I0122 16:45:50.647937 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ck8\" (UniqueName: \"kubernetes.io/projected/1344217d-c8f9-4f2a-aaba-588a1993e4d2-kube-api-access-d5ck8\") pod \"telemetry-operator-controller-manager-85cd9769bb-xp2tx\" (UID: \"1344217d-c8f9-4f2a-aaba-588a1993e4d2\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.647961 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.648096 4704 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.648135 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert podName:3c79bdf7-d523-40e2-8539-f28025e1a92f nodeName:}" failed. No retries permitted until 2026-01-22 16:45:51.648122453 +0000 UTC m=+1044.292669153 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert") pod "infra-operator-controller-manager-54ccf4f85d-77kz5" (UID: "3c79bdf7-d523-40e2-8539-f28025e1a92f") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.648308 4704 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.648335 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert podName:a831d8ed-7a07-4105-9c36-c0ce0a60d1db nodeName:}" failed. No retries permitted until 2026-01-22 16:45:51.148325949 +0000 UTC m=+1043.792872649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" (UID: "a831d8ed-7a07-4105-9c36-c0ce0a60d1db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.651005 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.653183 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dfwqw" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.673845 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.677655 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.690484 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrrww\" (UniqueName: \"kubernetes.io/projected/8ab35638-b730-42d8-ab86-d7573f3b5083-kube-api-access-lrrww\") pod \"octavia-operator-controller-manager-7bd9774b6-pmcms\" (UID: \"8ab35638-b730-42d8-ab86-d7573f3b5083\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.692807 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fq52\" (UniqueName: \"kubernetes.io/projected/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-kube-api-access-7fq52\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.703229 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.705073 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7g94\" (UniqueName: \"kubernetes.io/projected/d7747ccf-7f71-46a7-86b2-782561d8c41c-kube-api-access-w7g94\") pod \"ovn-operator-controller-manager-55db956ddc-2ntql\" (UID: \"d7747ccf-7f71-46a7-86b2-782561d8c41c\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.710971 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.711443 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ggc\" (UniqueName: \"kubernetes.io/projected/36ac804d-cc67-4975-9b4d-6ccaed33f8e9-kube-api-access-m2ggc\") pod \"placement-operator-controller-manager-5d646b7d76-w2xzp\" (UID: \"36ac804d-cc67-4975-9b4d-6ccaed33f8e9\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.732675 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.758428 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stdbb\" (UniqueName: \"kubernetes.io/projected/115e9b6d-342e-4161-80a7-fd6786dd97ab-kube-api-access-stdbb\") pod \"test-operator-controller-manager-69797bbcbd-sc4sv\" (UID: \"115e9b6d-342e-4161-80a7-fd6786dd97ab\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.760594 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d5ck8\" (UniqueName: \"kubernetes.io/projected/1344217d-c8f9-4f2a-aaba-588a1993e4d2-kube-api-access-d5ck8\") pod \"telemetry-operator-controller-manager-85cd9769bb-xp2tx\" (UID: \"1344217d-c8f9-4f2a-aaba-588a1993e4d2\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.760763 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xdzs\" (UniqueName: \"kubernetes.io/projected/361a820d-5d68-41d8-834e-8faf6862ac00-kube-api-access-7xdzs\") pod \"swift-operator-controller-manager-547cbdb99f-h2sh7\" (UID: \"361a820d-5d68-41d8-834e-8faf6862ac00\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.764293 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6jb7m" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.764850 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.793092 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.802084 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.812833 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xdzs\" (UniqueName: \"kubernetes.io/projected/361a820d-5d68-41d8-834e-8faf6862ac00-kube-api-access-7xdzs\") pod \"swift-operator-controller-manager-547cbdb99f-h2sh7\" (UID: \"361a820d-5d68-41d8-834e-8faf6862ac00\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.823944 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.823632 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stdbb\" (UniqueName: \"kubernetes.io/projected/115e9b6d-342e-4161-80a7-fd6786dd97ab-kube-api-access-stdbb\") pod \"test-operator-controller-manager-69797bbcbd-sc4sv\" (UID: \"115e9b6d-342e-4161-80a7-fd6786dd97ab\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.851916 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ck8\" (UniqueName: \"kubernetes.io/projected/1344217d-c8f9-4f2a-aaba-588a1993e4d2-kube-api-access-d5ck8\") pod \"telemetry-operator-controller-manager-85cd9769bb-xp2tx\" (UID: \"1344217d-c8f9-4f2a-aaba-588a1993e4d2\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.862096 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl77x\" (UniqueName: \"kubernetes.io/projected/892df3b3-b506-4da6-8d5f-98b434e208fe-kube-api-access-wl77x\") pod 
\"watcher-operator-controller-manager-85b8fd6746-6j5cq\" (UID: \"892df3b3-b506-4da6-8d5f-98b434e208fe\") " pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.862139 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.862203 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22l2z\" (UniqueName: \"kubernetes.io/projected/649e2df4-8666-44f5-9038-275030931053-kube-api-access-22l2z\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.862236 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.862334 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.863524 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.867108 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6zlns" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.870276 4704 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.893705 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.910188 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx"] Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.928334 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.950427 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.964484 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5dg6\" (UniqueName: \"kubernetes.io/projected/cc5ed116-27c3-4b5d-9fe3-812c0eec8828-kube-api-access-r5dg6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nnggx\" (UID: \"cc5ed116-27c3-4b5d-9fe3-812c0eec8828\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.964541 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl77x\" (UniqueName: \"kubernetes.io/projected/892df3b3-b506-4da6-8d5f-98b434e208fe-kube-api-access-wl77x\") pod \"watcher-operator-controller-manager-85b8fd6746-6j5cq\" (UID: \"892df3b3-b506-4da6-8d5f-98b434e208fe\") " pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.964576 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.964652 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22l2z\" (UniqueName: \"kubernetes.io/projected/649e2df4-8666-44f5-9038-275030931053-kube-api-access-22l2z\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 
16:45:50.964701 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.964843 4704 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.964902 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:51.464887954 +0000 UTC m=+1044.109434654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "webhook-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.965041 4704 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: E0122 16:45:50.965111 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:51.465093029 +0000 UTC m=+1044.109639729 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "metrics-server-cert" not found Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.989974 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22l2z\" (UniqueName: \"kubernetes.io/projected/649e2df4-8666-44f5-9038-275030931053-kube-api-access-22l2z\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:50 crc kubenswrapper[4704]: I0122 16:45:50.997139 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.004484 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl77x\" (UniqueName: \"kubernetes.io/projected/892df3b3-b506-4da6-8d5f-98b434e208fe-kube-api-access-wl77x\") pod \"watcher-operator-controller-manager-85b8fd6746-6j5cq\" (UID: \"892df3b3-b506-4da6-8d5f-98b434e208fe\") " pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.020905 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.021577 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.034202 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.060447 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.065501 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5dg6\" (UniqueName: \"kubernetes.io/projected/cc5ed116-27c3-4b5d-9fe3-812c0eec8828-kube-api-access-r5dg6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nnggx\" (UID: \"cc5ed116-27c3-4b5d-9fe3-812c0eec8828\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.084970 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.129998 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.146549 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5dg6\" (UniqueName: \"kubernetes.io/projected/cc5ed116-27c3-4b5d-9fe3-812c0eec8828-kube-api-access-r5dg6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nnggx\" (UID: \"cc5ed116-27c3-4b5d-9fe3-812c0eec8828\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.154985 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.168987 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.169168 4704 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.169213 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert podName:a831d8ed-7a07-4105-9c36-c0ce0a60d1db nodeName:}" failed. No retries permitted until 2026-01-22 16:45:52.169199024 +0000 UTC m=+1044.813745714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" (UID: "a831d8ed-7a07-4105-9c36-c0ce0a60d1db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.202000 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" Jan 22 16:45:51 crc kubenswrapper[4704]: W0122 16:45:51.240090 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd19d6a3_d166_41b8_ac16_76d87c51cad5.slice/crio-b5138ad692d4d610492bf966c1ca9c6792fa7a8e83218e38dca3cfadd26e0940 WatchSource:0}: Error finding container b5138ad692d4d610492bf966c1ca9c6792fa7a8e83218e38dca3cfadd26e0940: Status 404 returned error can't find the container with id b5138ad692d4d610492bf966c1ca9c6792fa7a8e83218e38dca3cfadd26e0940 Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.341546 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.475893 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.476308 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.476104 4704 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.476598 4704 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:52.476580408 +0000 UTC m=+1045.121127108 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "webhook-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.476539 4704 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.476967 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:52.476942358 +0000 UTC m=+1045.121489118 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "metrics-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.532892 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.547930 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8"] Jan 22 16:45:51 crc kubenswrapper[4704]: W0122 16:45:51.563926 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17fe4464_7b64_4efe_b95b_89834259fc79.slice/crio-9019e98a8e493727cba9d24463dc92c2c262b794748be6d1667304b053c37096 WatchSource:0}: Error finding container 9019e98a8e493727cba9d24463dc92c2c262b794748be6d1667304b053c37096: Status 404 returned error can't find the container with id 9019e98a8e493727cba9d24463dc92c2c262b794748be6d1667304b053c37096 Jan 22 16:45:51 crc kubenswrapper[4704]: W0122 16:45:51.581617 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5539e8b_5116_4c16_9b27_6b5958450759.slice/crio-596ae4c76204fcd895b3abec8cf4aee41c89c8da036fba38da6f964f169c2b50 WatchSource:0}: Error finding container 596ae4c76204fcd895b3abec8cf4aee41c89c8da036fba38da6f964f169c2b50: Status 404 returned error can't find the container with id 596ae4c76204fcd895b3abec8cf4aee41c89c8da036fba38da6f964f169c2b50 Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.617053 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj"] Jan 22 16:45:51 crc 
kubenswrapper[4704]: W0122 16:45:51.642457 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50d3d899_4725_4b05_8dc8_84152766e963.slice/crio-e292f164389b72f255c0da07e7fb0c9f2b8344605f31ff239b10370aa3bb6de8 WatchSource:0}: Error finding container e292f164389b72f255c0da07e7fb0c9f2b8344605f31ff239b10370aa3bb6de8: Status 404 returned error can't find the container with id e292f164389b72f255c0da07e7fb0c9f2b8344605f31ff239b10370aa3bb6de8 Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.679555 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.679697 4704 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: E0122 16:45:51.679747 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert podName:3c79bdf7-d523-40e2-8539-f28025e1a92f nodeName:}" failed. No retries permitted until 2026-01-22 16:45:53.679733047 +0000 UTC m=+1046.324279747 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert") pod "infra-operator-controller-manager-54ccf4f85d-77kz5" (UID: "3c79bdf7-d523-40e2-8539-f28025e1a92f") : secret "infra-operator-webhook-server-cert" not found Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.817228 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" event={"ID":"068092e4-bd7d-4f6f-8806-b794f3dbf696","Type":"ContainerStarted","Data":"67c9d1ddc04f275c0f616d3a38f7a3db3a1c3ec2d104b59cfca7d964bd7c4369"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.818066 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" event={"ID":"17fe4464-7b64-4efe-b95b-89834259fc79","Type":"ContainerStarted","Data":"9019e98a8e493727cba9d24463dc92c2c262b794748be6d1667304b053c37096"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.819034 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" event={"ID":"8be9d1b7-ad8a-41b0-a578-e26baafcf932","Type":"ContainerStarted","Data":"ad06e3f2bdcb3ca7ddeea6a3169061ce363edc4429b9f6d43ea4bdf5bb4c3992"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.820056 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" event={"ID":"b5539e8b-5116-4c16-9b27-6b5958450759","Type":"ContainerStarted","Data":"596ae4c76204fcd895b3abec8cf4aee41c89c8da036fba38da6f964f169c2b50"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.820823 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" 
event={"ID":"f166ae0f-3591-4099-bd69-62ec09ba977a","Type":"ContainerStarted","Data":"b69e4f27dd976afffc7467a6f6c478f805da1f825a79e98133b05f4302407dcd"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.821533 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" event={"ID":"50d3d899-4725-4b05-8dc8-84152766e963","Type":"ContainerStarted","Data":"e292f164389b72f255c0da07e7fb0c9f2b8344605f31ff239b10370aa3bb6de8"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.822194 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" event={"ID":"7974da72-060f-48cb-b06e-7fae3ecd377d","Type":"ContainerStarted","Data":"88507f6e5f4adfe7db01337c3af0c93c6387a20b8b4bd90ff85d4ee0aed75a44"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.823012 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" event={"ID":"dd19d6a3-d166-41b8-ac16-76d87c51cad5","Type":"ContainerStarted","Data":"b5138ad692d4d610492bf966c1ca9c6792fa7a8e83218e38dca3cfadd26e0940"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.829491 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" event={"ID":"170b5a59-8ffd-47a8-b2b9-a0f48167050d","Type":"ContainerStarted","Data":"36d096b2fe0c24576a16f4d1bab42690b09ee82fd39819a71779f5d00e5d988b"} Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.884893 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.894559 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 
16:45:51.949775 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.967084 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7"] Jan 22 16:45:51 crc kubenswrapper[4704]: I0122 16:45:51.972035 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms"] Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.030756 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx"] Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.031914 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq"] Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.041567 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7"] Jan 22 16:45:52 crc kubenswrapper[4704]: W0122 16:45:52.056494 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod892df3b3_b506_4da6_8d5f_98b434e208fe.slice/crio-54b24e2ed5453eefe1668e23a6b717da8b8c0e62da937634b8a8e2757b4624ef WatchSource:0}: Error finding container 54b24e2ed5453eefe1668e23a6b717da8b8c0e62da937634b8a8e2757b4624ef: Status 404 returned error can't find the container with id 54b24e2ed5453eefe1668e23a6b717da8b8c0e62da937634b8a8e2757b4624ef Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.059377 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp"] Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.059998 4704 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:38.102.83.196:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wl77x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-85b8fd6746-6j5cq_openstack-operators(892df3b3-b506-4da6-8d5f-98b434e208fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.063518 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe" Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.083419 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx"] Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.091709 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xdzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-h2sh7_openstack-operators(361a820d-5d68-41d8-834e-8faf6862ac00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.092975 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" podUID="361a820d-5d68-41d8-834e-8faf6862ac00" Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.101421 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r5dg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nnggx_openstack-operators(cc5ed116-27c3-4b5d-9fe3-812c0eec8828): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.103325 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" podUID="cc5ed116-27c3-4b5d-9fe3-812c0eec8828" Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.192296 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.192542 4704 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.192599 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert podName:a831d8ed-7a07-4105-9c36-c0ce0a60d1db nodeName:}" failed. No retries permitted until 2026-01-22 16:45:54.192580185 +0000 UTC m=+1046.837126895 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" (UID: "a831d8ed-7a07-4105-9c36-c0ce0a60d1db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.217807 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv"] Jan 22 16:45:52 crc kubenswrapper[4704]: W0122 16:45:52.222609 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod115e9b6d_342e_4161_80a7_fd6786dd97ab.slice/crio-8c6c93d9f63362f894e7c624428d5eac7b3ee32e67adc783bb4c2d4bf10489d4 WatchSource:0}: Error finding container 8c6c93d9f63362f894e7c624428d5eac7b3ee32e67adc783bb4c2d4bf10489d4: Status 404 returned error can't find the container with id 8c6c93d9f63362f894e7c624428d5eac7b3ee32e67adc783bb4c2d4bf10489d4 Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.224871 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stdbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-sc4sv_openstack-operators(115e9b6d-342e-4161-80a7-fd6786dd97ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.226520 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" podUID="115e9b6d-342e-4161-80a7-fd6786dd97ab" Jan 22 16:45:52 crc 
kubenswrapper[4704]: I0122 16:45:52.497058 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.497196 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.497265 4704 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.497349 4704 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.497354 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:54.497334474 +0000 UTC m=+1047.141881234 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "metrics-server-cert" not found Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.497421 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:54.497403856 +0000 UTC m=+1047.141950636 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "webhook-server-cert" not found Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.836843 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" event={"ID":"115e9b6d-342e-4161-80a7-fd6786dd97ab","Type":"ContainerStarted","Data":"8c6c93d9f63362f894e7c624428d5eac7b3ee32e67adc783bb4c2d4bf10489d4"} Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.838112 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" event={"ID":"cc5ed116-27c3-4b5d-9fe3-812c0eec8828","Type":"ContainerStarted","Data":"a0744295a5d2e6cf50450046ddda0247d769e08814ad5d61c797fb33fc765499"} Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.839038 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" podUID="115e9b6d-342e-4161-80a7-fd6786dd97ab"
Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.839473 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" podUID="cc5ed116-27c3-4b5d-9fe3-812c0eec8828"
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.840057 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" event={"ID":"52786693-8d66-4a9d-aff2-b6d4b7c260be","Type":"ContainerStarted","Data":"45e31e59b8e8b654f0b52f196136c65eace451622f7ee4381e7e27ed1fc6f334"}
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.841656 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" event={"ID":"d7747ccf-7f71-46a7-86b2-782561d8c41c","Type":"ContainerStarted","Data":"f8ed07dd67674cd01427e4417fa96202fc15de3486d75277e6d9f98325746b19"}
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.843165 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" event={"ID":"48e30eae-1a73-45ab-8ce9-0e64d820d7d6","Type":"ContainerStarted","Data":"762664b45cd0288165ce0ed53713d393a75a88df9dec182e7b175ecb19fd969c"}
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.845015 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" 
event={"ID":"9cbde52d-972f-41dc-b9b0-6cd275d013a8","Type":"ContainerStarted","Data":"487be1dafe237b495019bb7f337aaf6d6aa08ba158b42a994bafdd1f0b836486"}
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.846802 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" event={"ID":"361a820d-5d68-41d8-834e-8faf6862ac00","Type":"ContainerStarted","Data":"c5649707ae2ec7a9ea07899ea3c2cc1e91ccaaf668d51b2a88249cb845878508"}
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.848038 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" event={"ID":"36ac804d-cc67-4975-9b4d-6ccaed33f8e9","Type":"ContainerStarted","Data":"377c32845b3c475643f9e45f41937296f0bcba61d1c396408e7624fb97bc339e"}
Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.848140 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" podUID="361a820d-5d68-41d8-834e-8faf6862ac00"
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.849527 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" event={"ID":"1344217d-c8f9-4f2a-aaba-588a1993e4d2","Type":"ContainerStarted","Data":"205a586ef1fbb6d044f5a4e9832273931cedc8245a649a44e8cb600968889613"}
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.850626 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" 
event={"ID":"892df3b3-b506-4da6-8d5f-98b434e208fe","Type":"ContainerStarted","Data":"54b24e2ed5453eefe1668e23a6b717da8b8c0e62da937634b8a8e2757b4624ef"}
Jan 22 16:45:52 crc kubenswrapper[4704]: E0122 16:45:52.851801 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe"
Jan 22 16:45:52 crc kubenswrapper[4704]: I0122 16:45:52.853082 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" event={"ID":"8ab35638-b730-42d8-ab86-d7573f3b5083","Type":"ContainerStarted","Data":"60b8900cbe3276131753eb3ee49286f6b92645f8a2f04a8e058ee63dcca923b4"}
Jan 22 16:45:53 crc kubenswrapper[4704]: I0122 16:45:53.712875 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5"
Jan 22 16:45:53 crc kubenswrapper[4704]: E0122 16:45:53.712998 4704 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 16:45:53 crc kubenswrapper[4704]: E0122 16:45:53.713046 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert podName:3c79bdf7-d523-40e2-8539-f28025e1a92f nodeName:}" failed. No retries permitted until 2026-01-22 16:45:57.713032096 +0000 UTC m=+1050.357578796 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert") pod "infra-operator-controller-manager-54ccf4f85d-77kz5" (UID: "3c79bdf7-d523-40e2-8539-f28025e1a92f") : secret "infra-operator-webhook-server-cert" not found
Jan 22 16:45:53 crc kubenswrapper[4704]: E0122 16:45:53.861425 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" podUID="361a820d-5d68-41d8-834e-8faf6862ac00"
Jan 22 16:45:53 crc kubenswrapper[4704]: E0122 16:45:53.861463 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe"
Jan 22 16:45:53 crc kubenswrapper[4704]: E0122 16:45:53.861478 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" podUID="cc5ed116-27c3-4b5d-9fe3-812c0eec8828"
Jan 22 16:45:53 crc kubenswrapper[4704]: E0122 16:45:53.862401 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" podUID="115e9b6d-342e-4161-80a7-fd6786dd97ab"
Jan 22 16:45:54 crc kubenswrapper[4704]: I0122 16:45:54.219922 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws"
Jan 22 16:45:54 crc kubenswrapper[4704]: E0122 16:45:54.220112 4704 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 16:45:54 crc kubenswrapper[4704]: E0122 16:45:54.220166 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert podName:a831d8ed-7a07-4105-9c36-c0ce0a60d1db nodeName:}" failed. No retries permitted until 2026-01-22 16:45:58.220150181 +0000 UTC m=+1050.864696891 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" (UID: "a831d8ed-7a07-4105-9c36-c0ce0a60d1db") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 16:45:54 crc kubenswrapper[4704]: I0122 16:45:54.526288 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"
Jan 22 16:45:54 crc kubenswrapper[4704]: I0122 16:45:54.526434 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"
Jan 22 16:45:54 crc kubenswrapper[4704]: E0122 16:45:54.526655 4704 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 16:45:54 crc kubenswrapper[4704]: E0122 16:45:54.526744 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:58.526724272 +0000 UTC m=+1051.171270972 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "webhook-server-cert" not found
Jan 22 16:45:54 crc kubenswrapper[4704]: E0122 16:45:54.526662 4704 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 16:45:54 crc kubenswrapper[4704]: E0122 16:45:54.526997 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:45:58.526977919 +0000 UTC m=+1051.171524619 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "metrics-server-cert" not found
Jan 22 16:45:57 crc kubenswrapper[4704]: I0122 16:45:57.781038 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5"
Jan 22 16:45:57 crc kubenswrapper[4704]: E0122 16:45:57.781254 4704 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 16:45:57 crc kubenswrapper[4704]: E0122 16:45:57.781906 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert 
podName:3c79bdf7-d523-40e2-8539-f28025e1a92f nodeName:}" failed. No retries permitted until 2026-01-22 16:46:05.781878395 +0000 UTC m=+1058.426425175 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert") pod "infra-operator-controller-manager-54ccf4f85d-77kz5" (UID: "3c79bdf7-d523-40e2-8539-f28025e1a92f") : secret "infra-operator-webhook-server-cert" not found
Jan 22 16:45:58 crc kubenswrapper[4704]: I0122 16:45:58.287667 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws"
Jan 22 16:45:58 crc kubenswrapper[4704]: E0122 16:45:58.288114 4704 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 16:45:58 crc kubenswrapper[4704]: E0122 16:45:58.288241 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert podName:a831d8ed-7a07-4105-9c36-c0ce0a60d1db nodeName:}" failed. No retries permitted until 2026-01-22 16:46:06.288210697 +0000 UTC m=+1058.932757437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" (UID: "a831d8ed-7a07-4105-9c36-c0ce0a60d1db") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 16:45:58 crc kubenswrapper[4704]: I0122 16:45:58.592144 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"
Jan 22 16:45:58 crc kubenswrapper[4704]: I0122 16:45:58.592235 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"
Jan 22 16:45:58 crc kubenswrapper[4704]: E0122 16:45:58.592356 4704 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 16:45:58 crc kubenswrapper[4704]: E0122 16:45:58.592446 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:46:06.592423371 +0000 UTC m=+1059.236970081 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "webhook-server-cert" not found
Jan 22 16:45:58 crc kubenswrapper[4704]: E0122 16:45:58.592375 4704 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 16:45:58 crc kubenswrapper[4704]: E0122 16:45:58.592844 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:46:06.592779721 +0000 UTC m=+1059.237326441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "metrics-server-cert" not found
Jan 22 16:46:05 crc kubenswrapper[4704]: I0122 16:46:05.812289 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5"
Jan 22 16:46:05 crc kubenswrapper[4704]: E0122 16:46:05.812544 4704 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 22 16:46:05 crc kubenswrapper[4704]: E0122 16:46:05.812977 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert 
podName:3c79bdf7-d523-40e2-8539-f28025e1a92f nodeName:}" failed. No retries permitted until 2026-01-22 16:46:21.812955493 +0000 UTC m=+1074.457502193 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert") pod "infra-operator-controller-manager-54ccf4f85d-77kz5" (UID: "3c79bdf7-d523-40e2-8539-f28025e1a92f") : secret "infra-operator-webhook-server-cert" not found
Jan 22 16:46:06 crc kubenswrapper[4704]: I0122 16:46:06.321367 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws"
Jan 22 16:46:06 crc kubenswrapper[4704]: E0122 16:46:06.321570 4704 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 16:46:06 crc kubenswrapper[4704]: E0122 16:46:06.321644 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert podName:a831d8ed-7a07-4105-9c36-c0ce0a60d1db nodeName:}" failed. No retries permitted until 2026-01-22 16:46:22.321621902 +0000 UTC m=+1074.966168602 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" (UID: "a831d8ed-7a07-4105-9c36-c0ce0a60d1db") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 16:46:06 crc kubenswrapper[4704]: I0122 16:46:06.626291 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"
Jan 22 16:46:06 crc kubenswrapper[4704]: I0122 16:46:06.626416 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"
Jan 22 16:46:06 crc kubenswrapper[4704]: E0122 16:46:06.626509 4704 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 22 16:46:06 crc kubenswrapper[4704]: E0122 16:46:06.626610 4704 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 22 16:46:06 crc kubenswrapper[4704]: E0122 16:46:06.626620 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:46:22.626590617 +0000 UTC m=+1075.271137337 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "webhook-server-cert" not found
Jan 22 16:46:06 crc kubenswrapper[4704]: E0122 16:46:06.626702 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs podName:649e2df4-8666-44f5-9038-275030931053 nodeName:}" failed. No retries permitted until 2026-01-22 16:46:22.62668213 +0000 UTC m=+1075.271228840 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs") pod "openstack-operator-controller-manager-675f79667-ng9s7" (UID: "649e2df4-8666-44f5-9038-275030931053") : secret "metrics-server-cert" not found
Jan 22 16:46:09 crc kubenswrapper[4704]: E0122 16:46:09.124440 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5"
Jan 22 16:46:09 crc kubenswrapper[4704]: E0122 16:46:09.125060 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrrww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-pmcms_openstack-operators(8ab35638-b730-42d8-ab86-d7573f3b5083): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 16:46:09 crc kubenswrapper[4704]: E0122 16:46:09.126616 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" podUID="8ab35638-b730-42d8-ab86-d7573f3b5083"
Jan 22 16:46:09 crc kubenswrapper[4704]: E0122 16:46:09.682964 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f"
Jan 22 16:46:09 crc kubenswrapper[4704]: E0122 16:46:09.683220 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hczt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-69cf5d4557-hd8tx_openstack-operators(068092e4-bd7d-4f6f-8806-b794f3dbf696): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 16:46:09 crc kubenswrapper[4704]: E0122 16:46:09.684437 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" podUID="068092e4-bd7d-4f6f-8806-b794f3dbf696"
Jan 22 16:46:10 crc kubenswrapper[4704]: E0122 16:46:10.014388 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" podUID="068092e4-bd7d-4f6f-8806-b794f3dbf696"
Jan 22 16:46:10 crc kubenswrapper[4704]: E0122 16:46:10.014467 4704 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" podUID="8ab35638-b730-42d8-ab86-d7573f3b5083"
Jan 22 16:46:10 crc kubenswrapper[4704]: E0122 16:46:10.231568 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337"
Jan 22 16:46:10 crc kubenswrapper[4704]: E0122 16:46:10.231785 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnmk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-nm6c8_openstack-operators(17fe4464-7b64-4efe-b95b-89834259fc79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 22 16:46:10 crc kubenswrapper[4704]: E0122 16:46:10.233245 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" podUID="17fe4464-7b64-4efe-b95b-89834259fc79"
Jan 22 16:46:11 crc kubenswrapper[4704]: E0122 16:46:11.015940 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" podUID="17fe4464-7b64-4efe-b95b-89834259fc79" Jan 22 16:46:19 crc kubenswrapper[4704]: I0122 16:46:19.086414 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:46:19 crc kubenswrapper[4704]: I0122 16:46:19.086856 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:46:21 crc kubenswrapper[4704]: I0122 16:46:21.850901 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:46:21 crc kubenswrapper[4704]: I0122 16:46:21.860625 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c79bdf7-d523-40e2-8539-f28025e1a92f-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-77kz5\" (UID: \"3c79bdf7-d523-40e2-8539-f28025e1a92f\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.026019 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.357073 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.361292 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a831d8ed-7a07-4105-9c36-c0ce0a60d1db-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544gxws\" (UID: \"a831d8ed-7a07-4105-9c36-c0ce0a60d1db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.652955 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.661252 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.661362 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.665831 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-metrics-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.666320 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/649e2df4-8666-44f5-9038-275030931053-webhook-certs\") pod \"openstack-operator-controller-manager-675f79667-ng9s7\" (UID: \"649e2df4-8666-44f5-9038-275030931053\") " pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:46:22 crc kubenswrapper[4704]: I0122 16:46:22.876353 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:46:25 crc kubenswrapper[4704]: E0122 16:46:25.513762 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 22 16:46:25 crc kubenswrapper[4704]: E0122 16:46:25.513996 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xdzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-h2sh7_openstack-operators(361a820d-5d68-41d8-834e-8faf6862ac00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:46:25 crc kubenswrapper[4704]: E0122 16:46:25.515176 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" podUID="361a820d-5d68-41d8-834e-8faf6862ac00" Jan 22 16:46:25 crc kubenswrapper[4704]: E0122 16:46:25.543448 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4" Jan 22 16:46:25 crc kubenswrapper[4704]: E0122 16:46:25.543660 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bk9sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-rkxpv_openstack-operators(9cbde52d-972f-41dc-b9b0-6cd275d013a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:46:25 crc kubenswrapper[4704]: E0122 16:46:25.544862 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" podUID="9cbde52d-972f-41dc-b9b0-6cd275d013a8" Jan 22 16:46:26 crc kubenswrapper[4704]: E0122 16:46:26.118482 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" podUID="9cbde52d-972f-41dc-b9b0-6cd275d013a8" Jan 22 16:46:26 crc kubenswrapper[4704]: E0122 16:46:26.945123 4704 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 22 16:46:26 crc kubenswrapper[4704]: E0122 16:46:26.945703 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stdbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-sc4sv_openstack-operators(115e9b6d-342e-4161-80a7-fd6786dd97ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:46:26 crc kubenswrapper[4704]: E0122 16:46:26.946957 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" podUID="115e9b6d-342e-4161-80a7-fd6786dd97ab" Jan 22 16:46:27 crc kubenswrapper[4704]: E0122 16:46:27.355766 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 22 16:46:27 crc kubenswrapper[4704]: E0122 16:46:27.355946 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r5dg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nnggx_openstack-operators(cc5ed116-27c3-4b5d-9fe3-812c0eec8828): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:46:27 crc kubenswrapper[4704]: E0122 16:46:27.357705 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" podUID="cc5ed116-27c3-4b5d-9fe3-812c0eec8828" Jan 22 16:46:28 crc kubenswrapper[4704]: I0122 16:46:28.202419 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws"] Jan 22 16:46:28 crc kubenswrapper[4704]: I0122 16:46:28.571959 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5"] Jan 22 16:46:28 crc kubenswrapper[4704]: W0122 16:46:28.615986 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c79bdf7_d523_40e2_8539_f28025e1a92f.slice/crio-439efe53abda17537415f0b3639caf589ab7f7bf4c5b1725e45bb333b9dadbd0 WatchSource:0}: Error finding container 439efe53abda17537415f0b3639caf589ab7f7bf4c5b1725e45bb333b9dadbd0: Status 404 returned error can't find the container with id 439efe53abda17537415f0b3639caf589ab7f7bf4c5b1725e45bb333b9dadbd0 Jan 22 16:46:28 crc kubenswrapper[4704]: I0122 16:46:28.649387 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7"] Jan 22 16:46:28 crc kubenswrapper[4704]: W0122 16:46:28.679699 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod649e2df4_8666_44f5_9038_275030931053.slice/crio-5ce41bb1f902e5ef7bab9d4af58b9eec3aa39bb1bd640db2a54f5f60d08eef9f WatchSource:0}: Error finding container 
5ce41bb1f902e5ef7bab9d4af58b9eec3aa39bb1bd640db2a54f5f60d08eef9f: Status 404 returned error can't find the container with id 5ce41bb1f902e5ef7bab9d4af58b9eec3aa39bb1bd640db2a54f5f60d08eef9f Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.153908 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" event={"ID":"48e30eae-1a73-45ab-8ce9-0e64d820d7d6","Type":"ContainerStarted","Data":"5fcf251d49ed9306b16f058b5e21a274755ea736aa91f64d94d31d171c518306"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.154592 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.164537 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" event={"ID":"d7747ccf-7f71-46a7-86b2-782561d8c41c","Type":"ContainerStarted","Data":"164e58b8d00ccbcb53ed2f288ddfd8839f07402020c2ae7b4b38501d9376dbca"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.164757 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.174426 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" event={"ID":"170b5a59-8ffd-47a8-b2b9-a0f48167050d","Type":"ContainerStarted","Data":"2ba04ca5e57bd09c57f52a0bf51f3711438cdec78155ae241ea06dd10602d4d7"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.175373 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.191356 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" podStartSLOduration=3.806300628 podStartE2EDuration="39.19133234s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.94104751 +0000 UTC m=+1044.585594210" lastFinishedPulling="2026-01-22 16:46:27.326079222 +0000 UTC m=+1079.970625922" observedRunningTime="2026-01-22 16:46:29.176602041 +0000 UTC m=+1081.821148741" watchObservedRunningTime="2026-01-22 16:46:29.19133234 +0000 UTC m=+1081.835879040" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.192220 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" event={"ID":"892df3b3-b506-4da6-8d5f-98b434e208fe","Type":"ContainerStarted","Data":"a13d04d0427c0c5756ce3b580f60304a7b403d840ab720912baa5abce421e382"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.192982 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.203898 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" event={"ID":"8ab35638-b730-42d8-ab86-d7573f3b5083","Type":"ContainerStarted","Data":"42f5459efcb0ca334de96b6377d70a7e6d2246566207cb805351785fb99c9831"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.204503 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.209157 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" event={"ID":"b5539e8b-5116-4c16-9b27-6b5958450759","Type":"ContainerStarted","Data":"541044cc55e3314ae8fd304a2dc069f2995b6dd68ece1a31c8bcd31994e2e0b2"} Jan 22 
16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.209890 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.210574 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" podStartSLOduration=4.218689139 podStartE2EDuration="39.210562987s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.941178714 +0000 UTC m=+1044.585725414" lastFinishedPulling="2026-01-22 16:46:26.933052572 +0000 UTC m=+1079.577599262" observedRunningTime="2026-01-22 16:46:29.207536021 +0000 UTC m=+1081.852082721" watchObservedRunningTime="2026-01-22 16:46:29.210562987 +0000 UTC m=+1081.855109677" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.232508 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" event={"ID":"52786693-8d66-4a9d-aff2-b6d4b7c260be","Type":"ContainerStarted","Data":"f3856ef91186ea0e15412af0cc9e2b80f5e1d8a5cd6ae9eba74105b7fb228196"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.233293 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.245998 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" event={"ID":"8be9d1b7-ad8a-41b0-a578-e26baafcf932","Type":"ContainerStarted","Data":"446a7a2f7968c088296ed9662b3151ee30ca2c6c5659723f9ab28def8f4fefc0"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.246615 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" Jan 22 16:46:29 crc 
kubenswrapper[4704]: I0122 16:46:29.252581 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" event={"ID":"a831d8ed-7a07-4105-9c36-c0ce0a60d1db","Type":"ContainerStarted","Data":"f6817f2549a1ac0add5149dc76fdb0dbf2c479d5a48c461b14b958d59aecbc20"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.255327 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" podStartSLOduration=3.801682092 podStartE2EDuration="40.25530948s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:50.869256063 +0000 UTC m=+1043.513802763" lastFinishedPulling="2026-01-22 16:46:27.322883451 +0000 UTC m=+1079.967430151" observedRunningTime="2026-01-22 16:46:29.249747482 +0000 UTC m=+1081.894294182" watchObservedRunningTime="2026-01-22 16:46:29.25530948 +0000 UTC m=+1081.899856180" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.259124 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" event={"ID":"068092e4-bd7d-4f6f-8806-b794f3dbf696","Type":"ContainerStarted","Data":"0254e8fbb781cc36b261b0e4b6125a251f1947aa07037e71b3206fe2a9758e1a"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.259595 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.265367 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" event={"ID":"17fe4464-7b64-4efe-b95b-89834259fc79","Type":"ContainerStarted","Data":"5f79b89b1494607a1c490f8bbf31a23071afd5e21bfad3663c650501658f2fa3"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.266443 4704 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.289602 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" event={"ID":"1344217d-c8f9-4f2a-aaba-588a1993e4d2","Type":"ContainerStarted","Data":"44f9a8dfd63a97ebf167a99e3558da696750f672226c95174224527546a94393"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.289969 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.291983 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" podStartSLOduration=2.945305277 podStartE2EDuration="39.291971523s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.951904949 +0000 UTC m=+1044.596451649" lastFinishedPulling="2026-01-22 16:46:28.298571195 +0000 UTC m=+1080.943117895" observedRunningTime="2026-01-22 16:46:29.286802746 +0000 UTC m=+1081.931349446" watchObservedRunningTime="2026-01-22 16:46:29.291971523 +0000 UTC m=+1081.936518223" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.294331 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" event={"ID":"3c79bdf7-d523-40e2-8539-f28025e1a92f","Type":"ContainerStarted","Data":"439efe53abda17537415f0b3639caf589ab7f7bf4c5b1725e45bb333b9dadbd0"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.303495 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" 
event={"ID":"649e2df4-8666-44f5-9038-275030931053","Type":"ContainerStarted","Data":"89f6fee472bcd10c5bf1135f0d37720d77431309b948a4579a16a6dd9ff457e4"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.303534 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" event={"ID":"649e2df4-8666-44f5-9038-275030931053","Type":"ContainerStarted","Data":"5ce41bb1f902e5ef7bab9d4af58b9eec3aa39bb1bd640db2a54f5f60d08eef9f"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.303731 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.316108 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" event={"ID":"50d3d899-4725-4b05-8dc8-84152766e963","Type":"ContainerStarted","Data":"d23ccef5858f8246076730405c90ef71d3b1e3fe22bdce745574700c7964404c"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.316523 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" podStartSLOduration=4.39645876 podStartE2EDuration="40.316506161s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.405573268 +0000 UTC m=+1044.050119968" lastFinishedPulling="2026-01-22 16:46:27.325620669 +0000 UTC m=+1079.970167369" observedRunningTime="2026-01-22 16:46:29.31121257 +0000 UTC m=+1081.955759270" watchObservedRunningTime="2026-01-22 16:46:29.316506161 +0000 UTC m=+1081.961052861" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.316920 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.344650 
4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" event={"ID":"dd19d6a3-d166-41b8-ac16-76d87c51cad5","Type":"ContainerStarted","Data":"aab322a20c53abe84a0f1e0682b492875246305175dadb70eb463b478a66a7a5"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.344751 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.347389 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" podStartSLOduration=5.474755883 podStartE2EDuration="40.347376169s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.595633315 +0000 UTC m=+1044.240180015" lastFinishedPulling="2026-01-22 16:46:26.468253601 +0000 UTC m=+1079.112800301" observedRunningTime="2026-01-22 16:46:29.344531008 +0000 UTC m=+1081.989077708" watchObservedRunningTime="2026-01-22 16:46:29.347376169 +0000 UTC m=+1081.991922869" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.354934 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" event={"ID":"7974da72-060f-48cb-b06e-7fae3ecd377d","Type":"ContainerStarted","Data":"8d5683da75fbc4abc6f3d7518c283f81ab7dba1793a03e15a39e5b9e20d9aa9e"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.355487 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.385287 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" 
event={"ID":"f166ae0f-3591-4099-bd69-62ec09ba977a","Type":"ContainerStarted","Data":"bc8aae6ce0871d444aed5887df749e44fbc07312a006179b55c56f81e769656a"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.385842 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.400126 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" event={"ID":"36ac804d-cc67-4975-9b4d-6ccaed33f8e9","Type":"ContainerStarted","Data":"89f03fdead00cff660079e435fb1b995f08ddb000a812f687a8b31dc8a35f31e"} Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.400829 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.541217 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" podStartSLOduration=5.156897081 podStartE2EDuration="40.541201632s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.941217885 +0000 UTC m=+1044.585764585" lastFinishedPulling="2026-01-22 16:46:27.325522436 +0000 UTC m=+1079.970069136" observedRunningTime="2026-01-22 16:46:29.436043371 +0000 UTC m=+1082.080590071" watchObservedRunningTime="2026-01-22 16:46:29.541201632 +0000 UTC m=+1082.185748332" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.619994 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" podStartSLOduration=3.386448446 podStartE2EDuration="39.619973633s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:52.05985033 +0000 UTC 
m=+1044.704397030" lastFinishedPulling="2026-01-22 16:46:28.293375517 +0000 UTC m=+1080.937922217" observedRunningTime="2026-01-22 16:46:29.554100849 +0000 UTC m=+1082.198647549" watchObservedRunningTime="2026-01-22 16:46:29.619973633 +0000 UTC m=+1082.264520333" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.627166 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" podStartSLOduration=3.364516937 podStartE2EDuration="40.627146807s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:50.885616759 +0000 UTC m=+1043.530163459" lastFinishedPulling="2026-01-22 16:46:28.148246629 +0000 UTC m=+1080.792793329" observedRunningTime="2026-01-22 16:46:29.621728793 +0000 UTC m=+1082.266275493" watchObservedRunningTime="2026-01-22 16:46:29.627146807 +0000 UTC m=+1082.271693507" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.666372 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" podStartSLOduration=5.835795823 podStartE2EDuration="40.666356303s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.649572509 +0000 UTC m=+1044.294119209" lastFinishedPulling="2026-01-22 16:46:26.480132989 +0000 UTC m=+1079.124679689" observedRunningTime="2026-01-22 16:46:29.663277875 +0000 UTC m=+1082.307824575" watchObservedRunningTime="2026-01-22 16:46:29.666356303 +0000 UTC m=+1082.310903003" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.744363 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" podStartSLOduration=5.10056804 podStartE2EDuration="40.744330171s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.28891677 +0000 UTC m=+1043.933463470" 
lastFinishedPulling="2026-01-22 16:46:26.932678901 +0000 UTC m=+1079.577225601" observedRunningTime="2026-01-22 16:46:29.704820597 +0000 UTC m=+1082.349367297" watchObservedRunningTime="2026-01-22 16:46:29.744330171 +0000 UTC m=+1082.388876871" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.745272 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" podStartSLOduration=4.511475867 podStartE2EDuration="39.745268307s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:52.091299894 +0000 UTC m=+1044.735846584" lastFinishedPulling="2026-01-22 16:46:27.325092324 +0000 UTC m=+1079.969639024" observedRunningTime="2026-01-22 16:46:29.743012393 +0000 UTC m=+1082.387559093" watchObservedRunningTime="2026-01-22 16:46:29.745268307 +0000 UTC m=+1082.389815007" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.813111 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" podStartSLOduration=5.053755957 podStartE2EDuration="40.813092216s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.173339652 +0000 UTC m=+1043.817886352" lastFinishedPulling="2026-01-22 16:46:26.932675911 +0000 UTC m=+1079.577222611" observedRunningTime="2026-01-22 16:46:29.795680991 +0000 UTC m=+1082.440227691" watchObservedRunningTime="2026-01-22 16:46:29.813092216 +0000 UTC m=+1082.457638916" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.926298 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" podStartSLOduration=4.893576031 podStartE2EDuration="40.926278366s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.292251935 +0000 UTC m=+1043.936798625" 
lastFinishedPulling="2026-01-22 16:46:27.32495427 +0000 UTC m=+1079.969500960" observedRunningTime="2026-01-22 16:46:29.848067491 +0000 UTC m=+1082.492614191" watchObservedRunningTime="2026-01-22 16:46:29.926278366 +0000 UTC m=+1082.570825066" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.980961 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" podStartSLOduration=39.980941911 podStartE2EDuration="39.980941911s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:46:29.928115258 +0000 UTC m=+1082.572661958" watchObservedRunningTime="2026-01-22 16:46:29.980941911 +0000 UTC m=+1082.625488621" Jan 22 16:46:29 crc kubenswrapper[4704]: I0122 16:46:29.988687 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" podStartSLOduration=5.566358664 podStartE2EDuration="39.988662941s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:52.045943774 +0000 UTC m=+1044.690490474" lastFinishedPulling="2026-01-22 16:46:26.468248051 +0000 UTC m=+1079.112794751" observedRunningTime="2026-01-22 16:46:29.980334334 +0000 UTC m=+1082.624881034" watchObservedRunningTime="2026-01-22 16:46:29.988662941 +0000 UTC m=+1082.633209641" Jan 22 16:46:30 crc kubenswrapper[4704]: I0122 16:46:30.009316 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" podStartSLOduration=4.4449537 podStartE2EDuration="41.009300048s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.581952005 +0000 UTC m=+1044.226498705" lastFinishedPulling="2026-01-22 16:46:28.146298343 +0000 UTC 
m=+1080.790845053" observedRunningTime="2026-01-22 16:46:30.008070743 +0000 UTC m=+1082.652617463" watchObservedRunningTime="2026-01-22 16:46:30.009300048 +0000 UTC m=+1082.653846748" Jan 22 16:46:35 crc kubenswrapper[4704]: I0122 16:46:35.449303 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" event={"ID":"a831d8ed-7a07-4105-9c36-c0ce0a60d1db","Type":"ContainerStarted","Data":"07fe4373081327bc09c09f95c0f935e8aeef42f0bd7a4ef48429a5bf9f1e7109"} Jan 22 16:46:35 crc kubenswrapper[4704]: I0122 16:46:35.449669 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:46:35 crc kubenswrapper[4704]: I0122 16:46:35.451054 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" event={"ID":"3c79bdf7-d523-40e2-8539-f28025e1a92f","Type":"ContainerStarted","Data":"e830924afe79d6041fa3cb30136afc36fc8762bfcf90553bb3589893a64ca15f"} Jan 22 16:46:35 crc kubenswrapper[4704]: I0122 16:46:35.451204 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:46:35 crc kubenswrapper[4704]: I0122 16:46:35.499273 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" podStartSLOduration=39.319300491 podStartE2EDuration="45.499253841s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:46:28.270647851 +0000 UTC m=+1080.915194551" lastFinishedPulling="2026-01-22 16:46:34.450601201 +0000 UTC m=+1087.095147901" observedRunningTime="2026-01-22 16:46:35.478412338 +0000 UTC m=+1088.122959038" watchObservedRunningTime="2026-01-22 16:46:35.499253841 +0000 UTC m=+1088.143800541" Jan 
22 16:46:35 crc kubenswrapper[4704]: I0122 16:46:35.501497 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" podStartSLOduration=40.789114145 podStartE2EDuration="46.501485944s" podCreationTimestamp="2026-01-22 16:45:49 +0000 UTC" firstStartedPulling="2026-01-22 16:46:28.630140457 +0000 UTC m=+1081.274687157" lastFinishedPulling="2026-01-22 16:46:34.342512246 +0000 UTC m=+1086.987058956" observedRunningTime="2026-01-22 16:46:35.497061468 +0000 UTC m=+1088.141608168" watchObservedRunningTime="2026-01-22 16:46:35.501485944 +0000 UTC m=+1088.146032644" Jan 22 16:46:37 crc kubenswrapper[4704]: E0122 16:46:37.641316 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" podUID="361a820d-5d68-41d8-834e-8faf6862ac00" Jan 22 16:46:38 crc kubenswrapper[4704]: E0122 16:46:38.636335 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" podUID="cc5ed116-27c3-4b5d-9fe3-812c0eec8828" Jan 22 16:46:39 crc kubenswrapper[4704]: I0122 16:46:39.483947 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" event={"ID":"9cbde52d-972f-41dc-b9b0-6cd275d013a8","Type":"ContainerStarted","Data":"bf50d398a726199b7243d7f8d1ddd1d76f4f2d58d93d65210756fbc9a56cd789"} Jan 22 16:46:39 crc kubenswrapper[4704]: 
I0122 16:46:39.484985 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" Jan 22 16:46:39 crc kubenswrapper[4704]: I0122 16:46:39.502389 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" podStartSLOduration=3.066361981 podStartE2EDuration="49.502373559s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:51.940194496 +0000 UTC m=+1044.584741196" lastFinishedPulling="2026-01-22 16:46:38.376206064 +0000 UTC m=+1091.020752774" observedRunningTime="2026-01-22 16:46:39.499163897 +0000 UTC m=+1092.143710607" watchObservedRunningTime="2026-01-22 16:46:39.502373559 +0000 UTC m=+1092.146920259" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.055897 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-g4q7s" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.108474 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-hd8tx" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.202728 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-5w58r" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.218893 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-ggdqg" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.350501 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-nggqz" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.381119 4704 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-b6xnp" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.436042 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4hzcj" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.492717 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nm6c8" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.609507 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7jps5" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.674201 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-txdkv" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.796389 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-s59f7" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.805096 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-pmcms" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.896292 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2ntql" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.933152 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-w2xzp" Jan 22 16:46:40 crc kubenswrapper[4704]: I0122 16:46:40.999627 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-xp2tx" Jan 22 16:46:41 crc kubenswrapper[4704]: I0122 16:46:41.062355 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:46:41 crc kubenswrapper[4704]: E0122 16:46:41.635967 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" podUID="115e9b6d-342e-4161-80a7-fd6786dd97ab" Jan 22 16:46:42 crc kubenswrapper[4704]: I0122 16:46:42.032169 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-77kz5" Jan 22 16:46:42 crc kubenswrapper[4704]: I0122 16:46:42.662465 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544gxws" Jan 22 16:46:42 crc kubenswrapper[4704]: I0122 16:46:42.885925 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-675f79667-ng9s7" Jan 22 16:46:49 crc kubenswrapper[4704]: I0122 16:46:49.086935 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:46:49 crc kubenswrapper[4704]: I0122 16:46:49.086987 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:46:49 crc kubenswrapper[4704]: I0122 16:46:49.087022 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:46:49 crc kubenswrapper[4704]: I0122 16:46:49.087549 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88cf191bb3e64eb833ed16834e1430c8c271d9cb96c329f4eba42d0922f7467f"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:46:49 crc kubenswrapper[4704]: I0122 16:46:49.087594 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://88cf191bb3e64eb833ed16834e1430c8c271d9cb96c329f4eba42d0922f7467f" gracePeriod=600 Jan 22 16:46:50 crc kubenswrapper[4704]: I0122 16:46:50.590997 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="88cf191bb3e64eb833ed16834e1430c8c271d9cb96c329f4eba42d0922f7467f" exitCode=0 Jan 22 16:46:50 crc kubenswrapper[4704]: I0122 16:46:50.591106 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"88cf191bb3e64eb833ed16834e1430c8c271d9cb96c329f4eba42d0922f7467f"} Jan 22 16:46:50 crc kubenswrapper[4704]: I0122 16:46:50.591370 4704 scope.go:117] "RemoveContainer" 
containerID="c8865a0e2381cbeec53f87553007cf63e787be4f45fe167d5da2b4f406dd127d" Jan 22 16:46:50 crc kubenswrapper[4704]: I0122 16:46:50.707008 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-rkxpv" Jan 22 16:46:51 crc kubenswrapper[4704]: I0122 16:46:51.601072 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"33c05c7b04e52a99d7618873c0e8cfbae6126223bfd8e14eabf1b1f805e4a907"} Jan 22 16:46:51 crc kubenswrapper[4704]: I0122 16:46:51.602959 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" event={"ID":"cc5ed116-27c3-4b5d-9fe3-812c0eec8828","Type":"ContainerStarted","Data":"da8ca582120d20310264fef56807fde6a57070dc3f2536e7d3efa906e8e097a3"} Jan 22 16:46:51 crc kubenswrapper[4704]: I0122 16:46:51.637547 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nnggx" podStartSLOduration=3.237485758 podStartE2EDuration="1m1.637525319s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:52.101281828 +0000 UTC m=+1044.745828528" lastFinishedPulling="2026-01-22 16:46:50.501321379 +0000 UTC m=+1103.145868089" observedRunningTime="2026-01-22 16:46:51.637112747 +0000 UTC m=+1104.281659457" watchObservedRunningTime="2026-01-22 16:46:51.637525319 +0000 UTC m=+1104.282072029" Jan 22 16:46:53 crc kubenswrapper[4704]: I0122 16:46:53.621075 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" event={"ID":"361a820d-5d68-41d8-834e-8faf6862ac00","Type":"ContainerStarted","Data":"b3a46f8c86375b798235aebebdc079ea8cb483060264741445a58f0b0f1a91cc"} Jan 22 
16:46:53 crc kubenswrapper[4704]: I0122 16:46:53.621848 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" Jan 22 16:46:53 crc kubenswrapper[4704]: I0122 16:46:53.641892 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" podStartSLOduration=2.632566152 podStartE2EDuration="1m3.641865863s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:52.091604313 +0000 UTC m=+1044.736151003" lastFinishedPulling="2026-01-22 16:46:53.100904014 +0000 UTC m=+1105.745450714" observedRunningTime="2026-01-22 16:46:53.637463707 +0000 UTC m=+1106.282010487" watchObservedRunningTime="2026-01-22 16:46:53.641865863 +0000 UTC m=+1106.286412573" Jan 22 16:46:55 crc kubenswrapper[4704]: I0122 16:46:55.641963 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" event={"ID":"115e9b6d-342e-4161-80a7-fd6786dd97ab","Type":"ContainerStarted","Data":"d2c72f9b9de539d8c733c5f02e53cc6b9d48826e8c8f46a519bd440bfcd86a59"} Jan 22 16:46:55 crc kubenswrapper[4704]: I0122 16:46:55.642552 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" Jan 22 16:46:55 crc kubenswrapper[4704]: I0122 16:46:55.667583 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" podStartSLOduration=3.029852911 podStartE2EDuration="1m5.667561314s" podCreationTimestamp="2026-01-22 16:45:50 +0000 UTC" firstStartedPulling="2026-01-22 16:45:52.224717989 +0000 UTC m=+1044.869264689" lastFinishedPulling="2026-01-22 16:46:54.862426342 +0000 UTC m=+1107.506973092" observedRunningTime="2026-01-22 16:46:55.660415391 +0000 UTC m=+1108.304962101" 
watchObservedRunningTime="2026-01-22 16:46:55.667561314 +0000 UTC m=+1108.312108014" Jan 22 16:47:00 crc kubenswrapper[4704]: I0122 16:47:00.952935 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-h2sh7" Jan 22 16:47:01 crc kubenswrapper[4704]: I0122 16:47:01.024590 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-sc4sv" Jan 22 16:47:06 crc kubenswrapper[4704]: I0122 16:47:06.314887 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq"] Jan 22 16:47:06 crc kubenswrapper[4704]: I0122 16:47:06.316116 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe" containerName="manager" containerID="cri-o://a13d04d0427c0c5756ce3b580f60304a7b403d840ab720912baa5abce421e382" gracePeriod=10 Jan 22 16:47:06 crc kubenswrapper[4704]: I0122 16:47:06.408853 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq"] Jan 22 16:47:06 crc kubenswrapper[4704]: I0122 16:47:06.409053 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" podUID="59835123-6708-4c93-96da-82bcddc141c7" containerName="operator" containerID="cri-o://c7cdc49cdd3619c24c3c2ccf564c13646c7cbd27482001fcedfbbe4b76e98fed" gracePeriod=10 Jan 22 16:47:07 crc kubenswrapper[4704]: I0122 16:47:07.739343 4704 generic.go:334] "Generic (PLEG): container finished" podID="892df3b3-b506-4da6-8d5f-98b434e208fe" containerID="a13d04d0427c0c5756ce3b580f60304a7b403d840ab720912baa5abce421e382" exitCode=0 Jan 22 16:47:07 crc kubenswrapper[4704]: I0122 
16:47:07.739422 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" event={"ID":"892df3b3-b506-4da6-8d5f-98b434e208fe","Type":"ContainerDied","Data":"a13d04d0427c0c5756ce3b580f60304a7b403d840ab720912baa5abce421e382"} Jan 22 16:47:07 crc kubenswrapper[4704]: I0122 16:47:07.740923 4704 generic.go:334] "Generic (PLEG): container finished" podID="59835123-6708-4c93-96da-82bcddc141c7" containerID="c7cdc49cdd3619c24c3c2ccf564c13646c7cbd27482001fcedfbbe4b76e98fed" exitCode=0 Jan 22 16:47:07 crc kubenswrapper[4704]: I0122 16:47:07.740987 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" event={"ID":"59835123-6708-4c93-96da-82bcddc141c7","Type":"ContainerDied","Data":"c7cdc49cdd3619c24c3c2ccf564c13646c7cbd27482001fcedfbbe4b76e98fed"} Jan 22 16:47:07 crc kubenswrapper[4704]: I0122 16:47:07.983849 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.097814 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d47x\" (UniqueName: \"kubernetes.io/projected/59835123-6708-4c93-96da-82bcddc141c7-kube-api-access-2d47x\") pod \"59835123-6708-4c93-96da-82bcddc141c7\" (UID: \"59835123-6708-4c93-96da-82bcddc141c7\") " Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.108863 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59835123-6708-4c93-96da-82bcddc141c7-kube-api-access-2d47x" (OuterVolumeSpecName: "kube-api-access-2d47x") pod "59835123-6708-4c93-96da-82bcddc141c7" (UID: "59835123-6708-4c93-96da-82bcddc141c7"). InnerVolumeSpecName "kube-api-access-2d47x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.198971 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d47x\" (UniqueName: \"kubernetes.io/projected/59835123-6708-4c93-96da-82bcddc141c7-kube-api-access-2d47x\") on node \"crc\" DevicePath \"\"" Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.755729 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" event={"ID":"59835123-6708-4c93-96da-82bcddc141c7","Type":"ContainerDied","Data":"c25df24cac52407aa5842fb92a02ee6636d7e059dfb84d5ee0cf9213a9170979"} Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.755872 4704 scope.go:117] "RemoveContainer" containerID="c7cdc49cdd3619c24c3c2ccf564c13646c7cbd27482001fcedfbbe4b76e98fed" Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.755960 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq" Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.806958 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq"] Jan 22 16:47:08 crc kubenswrapper[4704]: I0122 16:47:08.816871 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b7565899b-9x9fq"] Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.259412 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.418386 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl77x\" (UniqueName: \"kubernetes.io/projected/892df3b3-b506-4da6-8d5f-98b434e208fe-kube-api-access-wl77x\") pod \"892df3b3-b506-4da6-8d5f-98b434e208fe\" (UID: \"892df3b3-b506-4da6-8d5f-98b434e208fe\") " Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.423450 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892df3b3-b506-4da6-8d5f-98b434e208fe-kube-api-access-wl77x" (OuterVolumeSpecName: "kube-api-access-wl77x") pod "892df3b3-b506-4da6-8d5f-98b434e208fe" (UID: "892df3b3-b506-4da6-8d5f-98b434e208fe"). InnerVolumeSpecName "kube-api-access-wl77x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.520274 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl77x\" (UniqueName: \"kubernetes.io/projected/892df3b3-b506-4da6-8d5f-98b434e208fe-kube-api-access-wl77x\") on node \"crc\" DevicePath \"\"" Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.646124 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59835123-6708-4c93-96da-82bcddc141c7" path="/var/lib/kubelet/pods/59835123-6708-4c93-96da-82bcddc141c7/volumes" Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.765638 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" event={"ID":"892df3b3-b506-4da6-8d5f-98b434e208fe","Type":"ContainerDied","Data":"54b24e2ed5453eefe1668e23a6b717da8b8c0e62da937634b8a8e2757b4624ef"} Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.765686 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq" Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.765697 4704 scope.go:117] "RemoveContainer" containerID="a13d04d0427c0c5756ce3b580f60304a7b403d840ab720912baa5abce421e382" Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.786920 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq"] Jan 22 16:47:09 crc kubenswrapper[4704]: I0122 16:47:09.793899 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b8fd6746-6j5cq"] Jan 22 16:47:11 crc kubenswrapper[4704]: I0122 16:47:11.658405 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe" path="/var/lib/kubelet/pods/892df3b3-b506-4da6-8d5f-98b434e208fe/volumes" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.835938 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-mnt9b"] Jan 22 16:47:12 crc kubenswrapper[4704]: E0122 16:47:12.836294 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59835123-6708-4c93-96da-82bcddc141c7" containerName="operator" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.836311 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="59835123-6708-4c93-96da-82bcddc141c7" containerName="operator" Jan 22 16:47:12 crc kubenswrapper[4704]: E0122 16:47:12.836338 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe" containerName="manager" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.836348 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe" containerName="manager" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.837223 4704 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="59835123-6708-4c93-96da-82bcddc141c7" containerName="operator" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.837248 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="892df3b3-b506-4da6-8d5f-98b434e208fe" containerName="manager" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.837709 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-mnt9b" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.840090 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-b7l42" Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.849654 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-mnt9b"] Jan 22 16:47:12 crc kubenswrapper[4704]: I0122 16:47:12.966991 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqs4n\" (UniqueName: \"kubernetes.io/projected/74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a-kube-api-access-vqs4n\") pod \"watcher-operator-index-mnt9b\" (UID: \"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a\") " pod="openstack-operators/watcher-operator-index-mnt9b" Jan 22 16:47:13 crc kubenswrapper[4704]: I0122 16:47:13.068437 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqs4n\" (UniqueName: \"kubernetes.io/projected/74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a-kube-api-access-vqs4n\") pod \"watcher-operator-index-mnt9b\" (UID: \"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a\") " pod="openstack-operators/watcher-operator-index-mnt9b" Jan 22 16:47:13 crc kubenswrapper[4704]: I0122 16:47:13.095936 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqs4n\" (UniqueName: \"kubernetes.io/projected/74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a-kube-api-access-vqs4n\") pod \"watcher-operator-index-mnt9b\" (UID: 
\"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a\") " pod="openstack-operators/watcher-operator-index-mnt9b" Jan 22 16:47:13 crc kubenswrapper[4704]: I0122 16:47:13.153919 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-mnt9b" Jan 22 16:47:13 crc kubenswrapper[4704]: I0122 16:47:13.733148 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-mnt9b"] Jan 22 16:47:13 crc kubenswrapper[4704]: I0122 16:47:13.795135 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-mnt9b" event={"ID":"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a","Type":"ContainerStarted","Data":"acae75c47166c9156308d76ecc8d70e305ace0abeea1f13d7258b61ec2f4fd73"} Jan 22 16:47:14 crc kubenswrapper[4704]: I0122 16:47:14.801653 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-mnt9b" event={"ID":"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a","Type":"ContainerStarted","Data":"dc34df274eea3d1e12e1ea912600a7999733a15df561ad7567d238f7251337f9"} Jan 22 16:47:16 crc kubenswrapper[4704]: I0122 16:47:16.822720 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-mnt9b" podStartSLOduration=4.663480502 podStartE2EDuration="4.822697991s" podCreationTimestamp="2026-01-22 16:47:12 +0000 UTC" firstStartedPulling="2026-01-22 16:47:13.751740036 +0000 UTC m=+1126.396286736" lastFinishedPulling="2026-01-22 16:47:13.910957525 +0000 UTC m=+1126.555504225" observedRunningTime="2026-01-22 16:47:14.819339685 +0000 UTC m=+1127.463886385" watchObservedRunningTime="2026-01-22 16:47:16.822697991 +0000 UTC m=+1129.467244691" Jan 22 16:47:16 crc kubenswrapper[4704]: I0122 16:47:16.828242 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-mnt9b"] Jan 22 16:47:16 crc kubenswrapper[4704]: I0122 16:47:16.828450 4704 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-mnt9b" podUID="74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a" containerName="registry-server" containerID="cri-o://dc34df274eea3d1e12e1ea912600a7999733a15df561ad7567d238f7251337f9" gracePeriod=2 Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.431250 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-9jxrh"] Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.432129 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.439996 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-9jxrh"] Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.534969 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tcl\" (UniqueName: \"kubernetes.io/projected/ce88084a-02b7-45de-bdc8-629e934784ca-kube-api-access-v8tcl\") pod \"watcher-operator-index-9jxrh\" (UID: \"ce88084a-02b7-45de-bdc8-629e934784ca\") " pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.636467 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tcl\" (UniqueName: \"kubernetes.io/projected/ce88084a-02b7-45de-bdc8-629e934784ca-kube-api-access-v8tcl\") pod \"watcher-operator-index-9jxrh\" (UID: \"ce88084a-02b7-45de-bdc8-629e934784ca\") " pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.679674 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tcl\" (UniqueName: \"kubernetes.io/projected/ce88084a-02b7-45de-bdc8-629e934784ca-kube-api-access-v8tcl\") pod \"watcher-operator-index-9jxrh\" (UID: 
\"ce88084a-02b7-45de-bdc8-629e934784ca\") " pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.752595 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.828285 4704 generic.go:334] "Generic (PLEG): container finished" podID="74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a" containerID="dc34df274eea3d1e12e1ea912600a7999733a15df561ad7567d238f7251337f9" exitCode=0 Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.828331 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-mnt9b" event={"ID":"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a","Type":"ContainerDied","Data":"dc34df274eea3d1e12e1ea912600a7999733a15df561ad7567d238f7251337f9"} Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.828360 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-mnt9b" event={"ID":"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a","Type":"ContainerDied","Data":"acae75c47166c9156308d76ecc8d70e305ace0abeea1f13d7258b61ec2f4fd73"} Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.828373 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acae75c47166c9156308d76ecc8d70e305ace0abeea1f13d7258b61ec2f4fd73" Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.828772 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-mnt9b" Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.941517 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqs4n\" (UniqueName: \"kubernetes.io/projected/74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a-kube-api-access-vqs4n\") pod \"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a\" (UID: \"74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a\") " Jan 22 16:47:17 crc kubenswrapper[4704]: I0122 16:47:17.947237 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a-kube-api-access-vqs4n" (OuterVolumeSpecName: "kube-api-access-vqs4n") pod "74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a" (UID: "74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a"). InnerVolumeSpecName "kube-api-access-vqs4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.043087 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqs4n\" (UniqueName: \"kubernetes.io/projected/74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a-kube-api-access-vqs4n\") on node \"crc\" DevicePath \"\"" Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.180771 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-9jxrh"] Jan 22 16:47:18 crc kubenswrapper[4704]: W0122 16:47:18.188045 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce88084a_02b7_45de_bdc8_629e934784ca.slice/crio-b6b0ee4305d9b2203f2ab5b7cc0b3839d2e6a8200b1ef4fb762e40ddb6119c61 WatchSource:0}: Error finding container b6b0ee4305d9b2203f2ab5b7cc0b3839d2e6a8200b1ef4fb762e40ddb6119c61: Status 404 returned error can't find the container with id b6b0ee4305d9b2203f2ab5b7cc0b3839d2e6a8200b1ef4fb762e40ddb6119c61 Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.838555 4704 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-mnt9b" Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.838625 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-9jxrh" event={"ID":"ce88084a-02b7-45de-bdc8-629e934784ca","Type":"ContainerStarted","Data":"f11b8800073545f96d0f502b9d267aacbabab3d0717ba55efcf2293f3cda3c58"} Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.839146 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-9jxrh" event={"ID":"ce88084a-02b7-45de-bdc8-629e934784ca","Type":"ContainerStarted","Data":"b6b0ee4305d9b2203f2ab5b7cc0b3839d2e6a8200b1ef4fb762e40ddb6119c61"} Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.858116 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-9jxrh" podStartSLOduration=1.643989489 podStartE2EDuration="1.858095109s" podCreationTimestamp="2026-01-22 16:47:17 +0000 UTC" firstStartedPulling="2026-01-22 16:47:18.191067245 +0000 UTC m=+1130.835613945" lastFinishedPulling="2026-01-22 16:47:18.405172865 +0000 UTC m=+1131.049719565" observedRunningTime="2026-01-22 16:47:18.855161335 +0000 UTC m=+1131.499708045" watchObservedRunningTime="2026-01-22 16:47:18.858095109 +0000 UTC m=+1131.502641809" Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.880933 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-mnt9b"] Jan 22 16:47:18 crc kubenswrapper[4704]: I0122 16:47:18.887748 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-mnt9b"] Jan 22 16:47:19 crc kubenswrapper[4704]: I0122 16:47:19.644515 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a" path="/var/lib/kubelet/pods/74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a/volumes" Jan 
22 16:47:27 crc kubenswrapper[4704]: I0122 16:47:27.753565 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:27 crc kubenswrapper[4704]: I0122 16:47:27.754543 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:27 crc kubenswrapper[4704]: I0122 16:47:27.790201 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:27 crc kubenswrapper[4704]: I0122 16:47:27.944480 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-9jxrh" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.678131 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g"] Jan 22 16:47:30 crc kubenswrapper[4704]: E0122 16:47:30.678704 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a" containerName="registry-server" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.678716 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a" containerName="registry-server" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.678898 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f9b3ef-d1b9-46e5-8fb8-9992c9a7ab1a" containerName="registry-server" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.679895 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.683157 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-h9zdt" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.691065 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g"] Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.725213 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-util\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.725290 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-bundle\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.725365 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2j6\" (UniqueName: \"kubernetes.io/projected/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-kube-api-access-ch2j6\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 
16:47:30.827231 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-bundle\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.827378 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch2j6\" (UniqueName: \"kubernetes.io/projected/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-kube-api-access-ch2j6\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.827438 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-util\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.827866 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-bundle\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.828166 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-util\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.846040 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch2j6\" (UniqueName: \"kubernetes.io/projected/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-kube-api-access-ch2j6\") pod \"2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:30 crc kubenswrapper[4704]: I0122 16:47:30.998671 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:31 crc kubenswrapper[4704]: I0122 16:47:31.451980 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g"] Jan 22 16:47:31 crc kubenswrapper[4704]: I0122 16:47:31.945139 4704 generic.go:334] "Generic (PLEG): container finished" podID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerID="3d0a4b0a0d3de9f40227a968120df988c223a3ba06abac35a6c897aa9453f40d" exitCode=0 Jan 22 16:47:31 crc kubenswrapper[4704]: I0122 16:47:31.945341 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" event={"ID":"a94e2442-d107-46dc-98fe-8bfaeb91b0e6","Type":"ContainerDied","Data":"3d0a4b0a0d3de9f40227a968120df988c223a3ba06abac35a6c897aa9453f40d"} Jan 22 16:47:31 crc kubenswrapper[4704]: I0122 16:47:31.945684 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" event={"ID":"a94e2442-d107-46dc-98fe-8bfaeb91b0e6","Type":"ContainerStarted","Data":"32eb19f21514667d3f995f15a3714d72cfdc0c26a16807fc2d36d690146e877b"} Jan 22 16:47:32 crc kubenswrapper[4704]: I0122 16:47:32.957076 4704 generic.go:334] "Generic (PLEG): container finished" podID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerID="25e7e1cd05fde82abc92f222bd1147772d4ec8d3d039c3684f5b04c766d01c76" exitCode=0 Jan 22 16:47:32 crc kubenswrapper[4704]: I0122 16:47:32.957140 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" event={"ID":"a94e2442-d107-46dc-98fe-8bfaeb91b0e6","Type":"ContainerDied","Data":"25e7e1cd05fde82abc92f222bd1147772d4ec8d3d039c3684f5b04c766d01c76"} Jan 22 16:47:33 crc kubenswrapper[4704]: I0122 16:47:33.970910 4704 generic.go:334] "Generic (PLEG): container finished" podID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerID="de3b6ddf7ad99eba06f4248b29f03dc359c621123c991f96e0e9c2606943393c" exitCode=0 Jan 22 16:47:33 crc kubenswrapper[4704]: I0122 16:47:33.971013 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" event={"ID":"a94e2442-d107-46dc-98fe-8bfaeb91b0e6","Type":"ContainerDied","Data":"de3b6ddf7ad99eba06f4248b29f03dc359c621123c991f96e0e9c2606943393c"} Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.342128 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.396132 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-util\") pod \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.396397 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-bundle\") pod \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.396610 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch2j6\" (UniqueName: \"kubernetes.io/projected/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-kube-api-access-ch2j6\") pod \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\" (UID: \"a94e2442-d107-46dc-98fe-8bfaeb91b0e6\") " Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.399445 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-bundle" (OuterVolumeSpecName: "bundle") pod "a94e2442-d107-46dc-98fe-8bfaeb91b0e6" (UID: "a94e2442-d107-46dc-98fe-8bfaeb91b0e6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.422988 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-kube-api-access-ch2j6" (OuterVolumeSpecName: "kube-api-access-ch2j6") pod "a94e2442-d107-46dc-98fe-8bfaeb91b0e6" (UID: "a94e2442-d107-46dc-98fe-8bfaeb91b0e6"). InnerVolumeSpecName "kube-api-access-ch2j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.428930 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-util" (OuterVolumeSpecName: "util") pod "a94e2442-d107-46dc-98fe-8bfaeb91b0e6" (UID: "a94e2442-d107-46dc-98fe-8bfaeb91b0e6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.499503 4704 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.499723 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch2j6\" (UniqueName: \"kubernetes.io/projected/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-kube-api-access-ch2j6\") on node \"crc\" DevicePath \"\"" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.499782 4704 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a94e2442-d107-46dc-98fe-8bfaeb91b0e6-util\") on node \"crc\" DevicePath \"\"" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.985660 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" event={"ID":"a94e2442-d107-46dc-98fe-8bfaeb91b0e6","Type":"ContainerDied","Data":"32eb19f21514667d3f995f15a3714d72cfdc0c26a16807fc2d36d690146e877b"} Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.985693 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32eb19f21514667d3f995f15a3714d72cfdc0c26a16807fc2d36d690146e877b" Jan 22 16:47:35 crc kubenswrapper[4704]: I0122 16:47:35.985743 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.272470 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m"] Jan 22 16:47:44 crc kubenswrapper[4704]: E0122 16:47:44.273065 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerName="extract" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.273079 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerName="extract" Jan 22 16:47:44 crc kubenswrapper[4704]: E0122 16:47:44.273103 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerName="util" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.273111 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerName="util" Jan 22 16:47:44 crc kubenswrapper[4704]: E0122 16:47:44.273132 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerName="pull" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.273140 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerName="pull" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.273279 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94e2442-d107-46dc-98fe-8bfaeb91b0e6" containerName="extract" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.273757 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.276463 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.276521 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dfwqw" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.282987 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m"] Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.326391 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-webhook-cert\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.326528 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vww2g\" (UniqueName: \"kubernetes.io/projected/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-kube-api-access-vww2g\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.326586 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-apiservice-cert\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: 
\"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.428025 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-webhook-cert\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.428567 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vww2g\" (UniqueName: \"kubernetes.io/projected/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-kube-api-access-vww2g\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.428623 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-apiservice-cert\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.433868 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-webhook-cert\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.434429 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-apiservice-cert\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.445273 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vww2g\" (UniqueName: \"kubernetes.io/projected/99b0e241-3467-43d4-8c17-f1b95d4ea8c3-kube-api-access-vww2g\") pod \"watcher-operator-controller-manager-d9d597c89-9ck8m\" (UID: \"99b0e241-3467-43d4-8c17-f1b95d4ea8c3\") " pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:44 crc kubenswrapper[4704]: I0122 16:47:44.596366 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:45 crc kubenswrapper[4704]: I0122 16:47:45.166201 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m"] Jan 22 16:47:46 crc kubenswrapper[4704]: I0122 16:47:46.062008 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" event={"ID":"99b0e241-3467-43d4-8c17-f1b95d4ea8c3","Type":"ContainerStarted","Data":"a5bf3b9900e98ae61044867a7fc569f4e39ef831ed50a0b800dda8ec663e431c"} Jan 22 16:47:46 crc kubenswrapper[4704]: I0122 16:47:46.062371 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" event={"ID":"99b0e241-3467-43d4-8c17-f1b95d4ea8c3","Type":"ContainerStarted","Data":"2454ba532302e7aac7ac2ff1ed8ca7040b74f9a2697e2a4f9c2eaa2e43303a1e"} Jan 22 16:47:46 crc kubenswrapper[4704]: I0122 16:47:46.062940 4704 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:47:46 crc kubenswrapper[4704]: I0122 16:47:46.081473 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" podStartSLOduration=2.081448789 podStartE2EDuration="2.081448789s" podCreationTimestamp="2026-01-22 16:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:47:46.077883338 +0000 UTC m=+1158.722430068" watchObservedRunningTime="2026-01-22 16:47:46.081448789 +0000 UTC m=+1158.725995489" Jan 22 16:47:54 crc kubenswrapper[4704]: I0122 16:47:54.601099 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-d9d597c89-9ck8m" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.728119 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.729834 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.732030 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.732306 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.732556 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.732579 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.733913 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.733984 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-2cqw8" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.734029 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.735739 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.736838 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.749357 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 22 16:48:06 
crc kubenswrapper[4704]: I0122 16:48:06.857728 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b171faa-1b29-41f7-9582-8e8003603f75-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.857852 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.857881 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b171faa-1b29-41f7-9582-8e8003603f75-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.857919 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.857964 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-confd\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.858008 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d85w\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-kube-api-access-6d85w\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.858073 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.858098 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.858114 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.858135 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.858153 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.958839 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d85w\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-kube-api-access-6d85w\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.958898 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.958926 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " 
pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.958950 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.958975 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.958990 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.959032 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b171faa-1b29-41f7-9582-8e8003603f75-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.959121 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.959151 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b171faa-1b29-41f7-9582-8e8003603f75-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.959191 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.959218 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.959692 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.960531 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-plugins\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.960900 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.960908 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.961590 4704 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.961629 4704 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/adabff4f8dd8a64a088a1f4ed0ce823865e3c76862bbd7708a8dd9f582697a7b/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.961877 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b171faa-1b29-41f7-9582-8e8003603f75-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.965077 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b171faa-1b29-41f7-9582-8e8003603f75-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.965120 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.965143 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/1b171faa-1b29-41f7-9582-8e8003603f75-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.965723 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.985988 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e494959a-70c7-4075-8ecd-6f14933e3e75\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:06 crc kubenswrapper[4704]: I0122 16:48:06.988256 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d85w\" (UniqueName: \"kubernetes.io/projected/1b171faa-1b29-41f7-9582-8e8003603f75-kube-api-access-6d85w\") pod \"rabbitmq-notifications-server-0\" (UID: \"1b171faa-1b29-41f7-9582-8e8003603f75\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.051045 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.516269 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 22 16:48:07 crc kubenswrapper[4704]: W0122 16:48:07.518407 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b171faa_1b29_41f7_9582_8e8003603f75.slice/crio-5e884b866c105b8ad0ea17e1705eaefc3ecbfc968bc1d3e99cc02277e48cb6dd WatchSource:0}: Error finding container 5e884b866c105b8ad0ea17e1705eaefc3ecbfc968bc1d3e99cc02277e48cb6dd: Status 404 returned error can't find the container with id 5e884b866c105b8ad0ea17e1705eaefc3ecbfc968bc1d3e99cc02277e48cb6dd Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.536926 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.542451 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.546143 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.546352 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.546473 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.546671 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.546960 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.547182 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.549256 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.551192 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-77k6z" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672543 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672587 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhw9\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-kube-api-access-jxhw9\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672660 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672694 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672732 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672753 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: 
I0122 16:48:07.672772 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672832 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.672987 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.673038 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.673214 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 
16:48:07.774707 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.774787 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxhw9\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-kube-api-access-jxhw9\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.774851 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.774874 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.774962 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.775003 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.775046 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.775065 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.775291 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.775366 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.775394 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.776106 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.776314 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.776335 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.777001 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.777104 4704 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.777128 4704 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f069d5c5080ce8731bd3a21f9bb1b5923ac116e9db1de6049d0a4df19db5a6b3/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.777869 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.784771 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.785216 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.786297 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.788481 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.803182 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxhw9\" (UniqueName: \"kubernetes.io/projected/e2ef8e1a-f771-48a2-a61b-866950a3f0a0-kube-api-access-jxhw9\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.834679 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a57d7ac9-0652-4015-87df-89fb6245fe9b\") pod \"rabbitmq-server-0\" (UID: \"e2ef8e1a-f771-48a2-a61b-866950a3f0a0\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:07 crc kubenswrapper[4704]: I0122 16:48:07.889542 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.236897 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1b171faa-1b29-41f7-9582-8e8003603f75","Type":"ContainerStarted","Data":"5e884b866c105b8ad0ea17e1705eaefc3ecbfc968bc1d3e99cc02277e48cb6dd"} Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.543307 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 22 16:48:08 crc kubenswrapper[4704]: W0122 16:48:08.565029 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2ef8e1a_f771_48a2_a61b_866950a3f0a0.slice/crio-be6c08be6d98c64a74752fa43e4e9bdbd6d812cc508487ff9db9050aa6d349a8 WatchSource:0}: Error finding container be6c08be6d98c64a74752fa43e4e9bdbd6d812cc508487ff9db9050aa6d349a8: Status 404 returned error can't find the container with id be6c08be6d98c64a74752fa43e4e9bdbd6d812cc508487ff9db9050aa6d349a8 Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.963953 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.966815 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.970963 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-2w4gh" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.971020 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.971429 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.971575 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.972934 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.977073 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.996155 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc64j\" (UniqueName: \"kubernetes.io/projected/9ba981e4-1f66-452c-b481-f482feda87b3-kube-api-access-kc64j\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.996199 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 
16:48:08.996222 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba981e4-1f66-452c-b481-f482feda87b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.996247 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a544ff94-570d-4c88-9cd9-29bb70752410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a544ff94-570d-4c88-9cd9-29bb70752410\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.996271 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.996302 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.996319 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ba981e4-1f66-452c-b481-f482feda87b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " 
pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:08 crc kubenswrapper[4704]: I0122 16:48:08.996345 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ba981e4-1f66-452c-b481-f482feda87b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097539 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc64j\" (UniqueName: \"kubernetes.io/projected/9ba981e4-1f66-452c-b481-f482feda87b3-kube-api-access-kc64j\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097594 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097631 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba981e4-1f66-452c-b481-f482feda87b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097668 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a544ff94-570d-4c88-9cd9-29bb70752410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a544ff94-570d-4c88-9cd9-29bb70752410\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " 
pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097701 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097746 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097772 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ba981e4-1f66-452c-b481-f482feda87b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.097890 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ba981e4-1f66-452c-b481-f482feda87b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.098404 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9ba981e4-1f66-452c-b481-f482feda87b3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 
16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.099397 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-kolla-config\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.103477 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-config-data-default\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.104021 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ba981e4-1f66-452c-b481-f482feda87b3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.107642 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba981e4-1f66-452c-b481-f482feda87b3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.109238 4704 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.109271 4704 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a544ff94-570d-4c88-9cd9-29bb70752410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a544ff94-570d-4c88-9cd9-29bb70752410\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24d0dec3ccc499bef88354cdf0b79f4fe829dee480c2e8ff9a5a41e0a4fc5020/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.109731 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ba981e4-1f66-452c-b481-f482feda87b3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.114484 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc64j\" (UniqueName: \"kubernetes.io/projected/9ba981e4-1f66-452c-b481-f482feda87b3-kube-api-access-kc64j\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.159047 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a544ff94-570d-4c88-9cd9-29bb70752410\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a544ff94-570d-4c88-9cd9-29bb70752410\") pod \"openstack-galera-0\" (UID: \"9ba981e4-1f66-452c-b481-f482feda87b3\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.253739 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" 
event={"ID":"e2ef8e1a-f771-48a2-a61b-866950a3f0a0","Type":"ContainerStarted","Data":"be6c08be6d98c64a74752fa43e4e9bdbd6d812cc508487ff9db9050aa6d349a8"} Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.302806 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.347582 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.348541 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.350200 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.351187 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.351323 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-h4svk" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.356675 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.503768 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmb9b\" (UniqueName: \"kubernetes.io/projected/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kube-api-access-dmb9b\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.504230 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-config-data\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.504304 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.504441 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kolla-config\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.504545 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.606851 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.606916 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmb9b\" (UniqueName: \"kubernetes.io/projected/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kube-api-access-dmb9b\") 
pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.606968 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-config-data\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.607011 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.607053 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kolla-config\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.609204 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kolla-config\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.610365 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-config-data\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.615102 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.629353 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmb9b\" (UniqueName: \"kubernetes.io/projected/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kube-api-access-dmb9b\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.629734 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.725120 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.726036 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.728902 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-k5jv5" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.731475 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.736191 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.827067 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2qcz\" (UniqueName: \"kubernetes.io/projected/29d5ab67-1ca3-482c-987e-1f299f728372-kube-api-access-s2qcz\") pod \"kube-state-metrics-0\" (UID: \"29d5ab67-1ca3-482c-987e-1f299f728372\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.853787 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 22 16:48:09 crc kubenswrapper[4704]: W0122 16:48:09.865682 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ba981e4_1f66_452c_b481_f482feda87b3.slice/crio-554251208b89ae59c8d1b522957850dd689031fad4c3985da069194da50f5da6 WatchSource:0}: Error finding container 554251208b89ae59c8d1b522957850dd689031fad4c3985da069194da50f5da6: Status 404 returned error can't find the container with id 554251208b89ae59c8d1b522957850dd689031fad4c3985da069194da50f5da6 Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.928028 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2qcz\" (UniqueName: \"kubernetes.io/projected/29d5ab67-1ca3-482c-987e-1f299f728372-kube-api-access-s2qcz\") pod \"kube-state-metrics-0\" (UID: \"29d5ab67-1ca3-482c-987e-1f299f728372\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:48:09 crc kubenswrapper[4704]: I0122 16:48:09.955661 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2qcz\" (UniqueName: 
\"kubernetes.io/projected/29d5ab67-1ca3-482c-987e-1f299f728372-kube-api-access-s2qcz\") pod \"kube-state-metrics-0\" (UID: \"29d5ab67-1ca3-482c-987e-1f299f728372\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.047166 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.275218 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"9ba981e4-1f66-452c-b481-f482feda87b3","Type":"ContainerStarted","Data":"554251208b89ae59c8d1b522957850dd689031fad4c3985da069194da50f5da6"} Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.314604 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:48:10 crc kubenswrapper[4704]: W0122 16:48:10.330414 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ad15c4a_ba0c_4ea1_804f_63eb7e4c96c8.slice/crio-c0bc8bba451a33c1946d6842a0b2edb118885d7290a7fff46ce36aea5809568f WatchSource:0}: Error finding container c0bc8bba451a33c1946d6842a0b2edb118885d7290a7fff46ce36aea5809568f: Status 404 returned error can't find the container with id c0bc8bba451a33c1946d6842a0b2edb118885d7290a7fff46ce36aea5809568f Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.425759 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.427337 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.432085 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.432175 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.432322 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.449464 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-6gcvl" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.449849 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.457589 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.603421 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.603515 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: 
\"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.603587 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrqd\" (UniqueName: \"kubernetes.io/projected/48dfb2d3-192d-4033-afcf-1abfb1a31f59-kube-api-access-wkrqd\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.603651 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48dfb2d3-192d-4033-afcf-1abfb1a31f59-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.603839 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48dfb2d3-192d-4033-afcf-1abfb1a31f59-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.603867 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.604027 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/48dfb2d3-192d-4033-afcf-1abfb1a31f59-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.647211 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:48:10 crc kubenswrapper[4704]: W0122 16:48:10.667938 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29d5ab67_1ca3_482c_987e_1f299f728372.slice/crio-c94ee0dbaf41470dda083a3ba8be2c23828f111d9da012155cd0e87f33b89f51 WatchSource:0}: Error finding container c94ee0dbaf41470dda083a3ba8be2c23828f111d9da012155cd0e87f33b89f51: Status 404 returned error can't find the container with id c94ee0dbaf41470dda083a3ba8be2c23828f111d9da012155cd0e87f33b89f51 Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.712636 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48dfb2d3-192d-4033-afcf-1abfb1a31f59-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.712688 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.713348 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/48dfb2d3-192d-4033-afcf-1abfb1a31f59-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.713410 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.713453 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.713541 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkrqd\" (UniqueName: \"kubernetes.io/projected/48dfb2d3-192d-4033-afcf-1abfb1a31f59-kube-api-access-wkrqd\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.713573 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48dfb2d3-192d-4033-afcf-1abfb1a31f59-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.714039 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/48dfb2d3-192d-4033-afcf-1abfb1a31f59-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.719266 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.729779 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/48dfb2d3-192d-4033-afcf-1abfb1a31f59-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.730281 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.730451 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/48dfb2d3-192d-4033-afcf-1abfb1a31f59-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.730844 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/48dfb2d3-192d-4033-afcf-1abfb1a31f59-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.738031 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkrqd\" (UniqueName: \"kubernetes.io/projected/48dfb2d3-192d-4033-afcf-1abfb1a31f59-kube-api-access-wkrqd\") pod \"alertmanager-metric-storage-0\" (UID: \"48dfb2d3-192d-4033-afcf-1abfb1a31f59\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.765439 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.852234 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl"] Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.853235 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.861207 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.863092 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-xqws9" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.863841 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl"] Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.964816 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.967278 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.973645 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.973894 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.974096 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.974277 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.974430 4704 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.974553 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.974756 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-jm6cw" Jan 22 16:48:10 crc kubenswrapper[4704]: I0122 16:48:10.974906 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.000611 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.036220 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1514213-cdd4-4219-aefc-7d8b37aa38c4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-nsxrl\" (UID: \"b1514213-cdd4-4219-aefc-7d8b37aa38c4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.036282 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwlt\" (UniqueName: \"kubernetes.io/projected/b1514213-cdd4-4219-aefc-7d8b37aa38c4-kube-api-access-zkwlt\") pod \"observability-ui-dashboards-66cbf594b5-nsxrl\" (UID: \"b1514213-cdd4-4219-aefc-7d8b37aa38c4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137512 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137556 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137605 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137638 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1514213-cdd4-4219-aefc-7d8b37aa38c4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-nsxrl\" (UID: \"b1514213-cdd4-4219-aefc-7d8b37aa38c4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137673 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwlt\" (UniqueName: \"kubernetes.io/projected/b1514213-cdd4-4219-aefc-7d8b37aa38c4-kube-api-access-zkwlt\") pod \"observability-ui-dashboards-66cbf594b5-nsxrl\" (UID: \"b1514213-cdd4-4219-aefc-7d8b37aa38c4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 
16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137698 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137749 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137769 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137807 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksrvr\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-kube-api-access-ksrvr\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137838 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137868 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3117e69-0a16-4403-a4a5-c35e78f711e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.137896 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: E0122 16:48:11.138178 4704 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 22 16:48:11 crc kubenswrapper[4704]: E0122 16:48:11.138219 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1514213-cdd4-4219-aefc-7d8b37aa38c4-serving-cert podName:b1514213-cdd4-4219-aefc-7d8b37aa38c4 nodeName:}" failed. No retries permitted until 2026-01-22 16:48:11.638205361 +0000 UTC m=+1184.282752061 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b1514213-cdd4-4219-aefc-7d8b37aa38c4-serving-cert") pod "observability-ui-dashboards-66cbf594b5-nsxrl" (UID: "b1514213-cdd4-4219-aefc-7d8b37aa38c4") : secret "observability-ui-dashboards" not found Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.214769 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwlt\" (UniqueName: \"kubernetes.io/projected/b1514213-cdd4-4219-aefc-7d8b37aa38c4-kube-api-access-zkwlt\") pod \"observability-ui-dashboards-66cbf594b5-nsxrl\" (UID: \"b1514213-cdd4-4219-aefc-7d8b37aa38c4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239121 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239165 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239197 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksrvr\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-kube-api-access-ksrvr\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc 
kubenswrapper[4704]: I0122 16:48:11.239224 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239248 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3117e69-0a16-4403-a4a5-c35e78f711e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239273 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239292 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239311 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239330 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.239396 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.244495 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.246189 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.268196 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.272935 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.273075 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.279867 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cb7c59d6b-qjbp6"] Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.281121 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.281540 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.303147 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3117e69-0a16-4403-a4a5-c35e78f711e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.306057 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb7c59d6b-qjbp6"] Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.334969 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.350161 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksrvr\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-kube-api-access-ksrvr\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.373242 4704 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.373283 4704 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f6cee35d84fb12089ddcd0f9d057c4fa69f92d7ca02888ccd6b2ec4e6b69478/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.383518 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"29d5ab67-1ca3-482c-987e-1f299f728372","Type":"ContainerStarted","Data":"c94ee0dbaf41470dda083a3ba8be2c23828f111d9da012155cd0e87f33b89f51"} Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.445552 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-service-ca\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.445631 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-config\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.445651 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-trusted-ca-bundle\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.445674 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpl6\" (UniqueName: \"kubernetes.io/projected/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-kube-api-access-hxpl6\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.445696 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-serving-cert\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.445715 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-oauth-serving-cert\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.445781 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-oauth-config\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.473990 
4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8","Type":"ContainerStarted","Data":"c0bc8bba451a33c1946d6842a0b2edb118885d7290a7fff46ce36aea5809568f"} Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.558586 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-service-ca\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.558644 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-config\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.558659 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-trusted-ca-bundle\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.558679 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpl6\" (UniqueName: \"kubernetes.io/projected/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-kube-api-access-hxpl6\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.558698 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-serving-cert\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.558723 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-oauth-serving-cert\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.558785 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-oauth-config\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.559880 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-service-ca\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.560210 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-oauth-serving-cert\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.560275 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-trusted-ca-bundle\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.565401 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-serving-cert\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.569428 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-config\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.570368 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-console-oauth-config\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.596221 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxpl6\" (UniqueName: \"kubernetes.io/projected/e59519e4-b1d2-49aa-a0a9-7ed72078ebe6-kube-api-access-hxpl6\") pod \"console-5cb7c59d6b-qjbp6\" (UID: \"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6\") " pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.609286 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.613720 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.660762 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1514213-cdd4-4219-aefc-7d8b37aa38c4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-nsxrl\" (UID: \"b1514213-cdd4-4219-aefc-7d8b37aa38c4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.663693 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1514213-cdd4-4219-aefc-7d8b37aa38c4-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-nsxrl\" (UID: \"b1514213-cdd4-4219-aefc-7d8b37aa38c4\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:11 crc kubenswrapper[4704]: W0122 16:48:11.686206 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48dfb2d3_192d_4033_afcf_1abfb1a31f59.slice/crio-85d6b3a6e29760f68506f907476caa763eac0dc9db965b092ec3fbbc004bfbd3 WatchSource:0}: Error finding container 85d6b3a6e29760f68506f907476caa763eac0dc9db965b092ec3fbbc004bfbd3: Status 404 returned error can't find the container with id 85d6b3a6e29760f68506f907476caa763eac0dc9db965b092ec3fbbc004bfbd3 Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.698363 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.796131 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" Jan 22 16:48:11 crc kubenswrapper[4704]: I0122 16:48:11.888455 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:12 crc kubenswrapper[4704]: I0122 16:48:12.482144 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"48dfb2d3-192d-4033-afcf-1abfb1a31f59","Type":"ContainerStarted","Data":"85d6b3a6e29760f68506f907476caa763eac0dc9db965b092ec3fbbc004bfbd3"} Jan 22 16:48:13 crc kubenswrapper[4704]: I0122 16:48:13.017315 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:48:13 crc kubenswrapper[4704]: W0122 16:48:13.607565 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3117e69_0a16_4403_a4a5_c35e78f711e6.slice/crio-f7fd926b9d2a5a1e7c74261c12cb03397263c014920a5b56434bc93a7b3843a6 WatchSource:0}: Error finding container f7fd926b9d2a5a1e7c74261c12cb03397263c014920a5b56434bc93a7b3843a6: Status 404 returned error can't find the container with id f7fd926b9d2a5a1e7c74261c12cb03397263c014920a5b56434bc93a7b3843a6 Jan 22 16:48:14 crc kubenswrapper[4704]: I0122 16:48:14.022110 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb7c59d6b-qjbp6"] Jan 22 16:48:14 crc kubenswrapper[4704]: I0122 16:48:14.499926 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerStarted","Data":"f7fd926b9d2a5a1e7c74261c12cb03397263c014920a5b56434bc93a7b3843a6"} Jan 22 16:48:17 crc kubenswrapper[4704]: I0122 16:48:17.377212 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl"] Jan 22 16:48:17 crc kubenswrapper[4704]: I0122 16:48:17.523123 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb7c59d6b-qjbp6" event={"ID":"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6","Type":"ContainerStarted","Data":"32948094f70c37830fc25ba46fe29c143587cd04f46e4b30c3445345878efe09"} Jan 22 16:48:20 crc kubenswrapper[4704]: I0122 16:48:20.552650 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" event={"ID":"b1514213-cdd4-4219-aefc-7d8b37aa38c4","Type":"ContainerStarted","Data":"1bd3c143840429a572818a77e36f99fb5cdc6faf4439c89c3cf2845159d318f5"} Jan 22 16:48:23 crc kubenswrapper[4704]: E0122 16:48:23.781960 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 22 16:48:23 crc kubenswrapper[4704]: E0122 16:48:23.782507 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n655h58fh64bh5bfh689h6bh68h67dh9bh74h7fh64dh5b8h5dh67dh5cfh679h675h657hf9h57chf8h594h6dh555hbdh9bh85h5fh5d6h669h594q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmb9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_watcher-kuttl-default(9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:48:23 crc kubenswrapper[4704]: E0122 16:48:23.785834 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/memcached-0" podUID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" Jan 22 16:48:24 crc kubenswrapper[4704]: I0122 16:48:24.583480 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb7c59d6b-qjbp6" event={"ID":"e59519e4-b1d2-49aa-a0a9-7ed72078ebe6","Type":"ContainerStarted","Data":"8c8dc3c6ab66365946ea2efcef658e45af896fac86c566102217a8f4a78c59ae"} Jan 22 16:48:24 crc kubenswrapper[4704]: E0122 16:48:24.584738 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="watcher-kuttl-default/memcached-0" podUID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" Jan 22 16:48:24 crc kubenswrapper[4704]: I0122 16:48:24.631731 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cb7c59d6b-qjbp6" podStartSLOduration=13.631712322 podStartE2EDuration="13.631712322s" podCreationTimestamp="2026-01-22 16:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:48:24.623201821 +0000 UTC m=+1197.267748541" watchObservedRunningTime="2026-01-22 16:48:24.631712322 +0000 UTC m=+1197.276259022" Jan 22 16:48:24 crc kubenswrapper[4704]: E0122 16:48:24.725202 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 22 16:48:24 crc kubenswrapper[4704]: E0122 16:48:24.725256 4704 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 22 16:48:24 crc kubenswrapper[4704]: E0122 16:48:24.725661 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=watcher-kuttl-default],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2qcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_watcher-kuttl-default(29d5ab67-1ca3-482c-987e-1f299f728372): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 16:48:24 crc kubenswrapper[4704]: E0122 16:48:24.726806 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="29d5ab67-1ca3-482c-987e-1f299f728372" Jan 22 16:48:25 crc kubenswrapper[4704]: E0122 16:48:25.594347 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="29d5ab67-1ca3-482c-987e-1f299f728372" Jan 22 16:48:26 crc kubenswrapper[4704]: I0122 16:48:26.599765 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" event={"ID":"b1514213-cdd4-4219-aefc-7d8b37aa38c4","Type":"ContainerStarted","Data":"6957afd2f0fecb96475ee0bdde7dc8d44edc7b549f84fcaa2c4e1e0adbe68e5c"} Jan 22 16:48:26 crc kubenswrapper[4704]: I0122 16:48:26.601190 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"9ba981e4-1f66-452c-b481-f482feda87b3","Type":"ContainerStarted","Data":"f36c8e2d60977b44a14a24667386dffe479e57a74a75e168f7f1ff68cbfc2862"} Jan 22 16:48:26 crc kubenswrapper[4704]: I0122 16:48:26.623887 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-nsxrl" podStartSLOduration=11.516064624 podStartE2EDuration="16.623859044s" podCreationTimestamp="2026-01-22 16:48:10 +0000 UTC" firstStartedPulling="2026-01-22 16:48:20.151244678 +0000 UTC m=+1192.795791378" lastFinishedPulling="2026-01-22 16:48:25.259039098 +0000 
UTC m=+1197.903585798" observedRunningTime="2026-01-22 16:48:26.61278182 +0000 UTC m=+1199.257328520" watchObservedRunningTime="2026-01-22 16:48:26.623859044 +0000 UTC m=+1199.268405784" Jan 22 16:48:27 crc kubenswrapper[4704]: I0122 16:48:27.611944 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"e2ef8e1a-f771-48a2-a61b-866950a3f0a0","Type":"ContainerStarted","Data":"fe6c65fa30bcdd8d9393fda49e8704794cd648111b4d02191c3b3705189cd9bf"} Jan 22 16:48:27 crc kubenswrapper[4704]: I0122 16:48:27.614074 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1b171faa-1b29-41f7-9582-8e8003603f75","Type":"ContainerStarted","Data":"4645b94dd89746cbb4c52287082d185afbc7208c37da980b73e1265dfdb97163"} Jan 22 16:48:28 crc kubenswrapper[4704]: I0122 16:48:28.620454 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerStarted","Data":"4b6b47ea6989e53f260594ddd9d48694140a8288ea384b7bbd60c9570eefc051"} Jan 22 16:48:28 crc kubenswrapper[4704]: I0122 16:48:28.621765 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"48dfb2d3-192d-4033-afcf-1abfb1a31f59","Type":"ContainerStarted","Data":"28790be91cfbb6ea3b03451d56f2689f92d2110f0e0009ad4589014249857b54"} Jan 22 16:48:30 crc kubenswrapper[4704]: I0122 16:48:30.639003 4704 generic.go:334] "Generic (PLEG): container finished" podID="9ba981e4-1f66-452c-b481-f482feda87b3" containerID="f36c8e2d60977b44a14a24667386dffe479e57a74a75e168f7f1ff68cbfc2862" exitCode=0 Jan 22 16:48:30 crc kubenswrapper[4704]: I0122 16:48:30.639101 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" 
event={"ID":"9ba981e4-1f66-452c-b481-f482feda87b3","Type":"ContainerDied","Data":"f36c8e2d60977b44a14a24667386dffe479e57a74a75e168f7f1ff68cbfc2862"} Jan 22 16:48:31 crc kubenswrapper[4704]: I0122 16:48:31.663746 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"9ba981e4-1f66-452c-b481-f482feda87b3","Type":"ContainerStarted","Data":"a5d0ac86ea1e621b3b427e8a7fa79ef958760adb35e18b85933ff8301e432a5b"} Jan 22 16:48:31 crc kubenswrapper[4704]: I0122 16:48:31.689639 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=10.489853641 podStartE2EDuration="24.689615925s" podCreationTimestamp="2026-01-22 16:48:07 +0000 UTC" firstStartedPulling="2026-01-22 16:48:09.874299576 +0000 UTC m=+1182.518846276" lastFinishedPulling="2026-01-22 16:48:24.07406186 +0000 UTC m=+1196.718608560" observedRunningTime="2026-01-22 16:48:31.682302958 +0000 UTC m=+1204.326849648" watchObservedRunningTime="2026-01-22 16:48:31.689615925 +0000 UTC m=+1204.334162645" Jan 22 16:48:31 crc kubenswrapper[4704]: I0122 16:48:31.698445 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:31 crc kubenswrapper[4704]: I0122 16:48:31.698494 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:31 crc kubenswrapper[4704]: I0122 16:48:31.709091 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:32 crc kubenswrapper[4704]: I0122 16:48:32.721395 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cb7c59d6b-qjbp6" Jan 22 16:48:32 crc kubenswrapper[4704]: I0122 16:48:32.784460 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-76479b6979-x64kd"] Jan 22 16:48:34 crc kubenswrapper[4704]: E0122 16:48:34.588956 4704 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3117e69_0a16_4403_a4a5_c35e78f711e6.slice/crio-4b6b47ea6989e53f260594ddd9d48694140a8288ea384b7bbd60c9570eefc051.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3117e69_0a16_4403_a4a5_c35e78f711e6.slice/crio-conmon-4b6b47ea6989e53f260594ddd9d48694140a8288ea384b7bbd60c9570eefc051.scope\": RecentStats: unable to find data in memory cache]" Jan 22 16:48:34 crc kubenswrapper[4704]: I0122 16:48:34.730433 4704 generic.go:334] "Generic (PLEG): container finished" podID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerID="4b6b47ea6989e53f260594ddd9d48694140a8288ea384b7bbd60c9570eefc051" exitCode=0 Jan 22 16:48:34 crc kubenswrapper[4704]: I0122 16:48:34.730507 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerDied","Data":"4b6b47ea6989e53f260594ddd9d48694140a8288ea384b7bbd60c9570eefc051"} Jan 22 16:48:35 crc kubenswrapper[4704]: I0122 16:48:35.738846 4704 generic.go:334] "Generic (PLEG): container finished" podID="48dfb2d3-192d-4033-afcf-1abfb1a31f59" containerID="28790be91cfbb6ea3b03451d56f2689f92d2110f0e0009ad4589014249857b54" exitCode=0 Jan 22 16:48:35 crc kubenswrapper[4704]: I0122 16:48:35.738899 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"48dfb2d3-192d-4033-afcf-1abfb1a31f59","Type":"ContainerDied","Data":"28790be91cfbb6ea3b03451d56f2689f92d2110f0e0009ad4589014249857b54"} Jan 22 16:48:36 crc kubenswrapper[4704]: I0122 16:48:36.757004 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/memcached-0" event={"ID":"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8","Type":"ContainerStarted","Data":"5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb"} Jan 22 16:48:36 crc kubenswrapper[4704]: I0122 16:48:36.757611 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:36 crc kubenswrapper[4704]: I0122 16:48:36.780607 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.043750326 podStartE2EDuration="27.78058284s" podCreationTimestamp="2026-01-22 16:48:09 +0000 UTC" firstStartedPulling="2026-01-22 16:48:10.335367082 +0000 UTC m=+1182.979913782" lastFinishedPulling="2026-01-22 16:48:36.072199596 +0000 UTC m=+1208.716746296" observedRunningTime="2026-01-22 16:48:36.776395201 +0000 UTC m=+1209.420941911" watchObservedRunningTime="2026-01-22 16:48:36.78058284 +0000 UTC m=+1209.425129540" Jan 22 16:48:39 crc kubenswrapper[4704]: I0122 16:48:39.303780 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:39 crc kubenswrapper[4704]: I0122 16:48:39.304271 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:39 crc kubenswrapper[4704]: I0122 16:48:39.388142 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:39 crc kubenswrapper[4704]: I0122 16:48:39.866410 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Jan 22 16:48:41 crc kubenswrapper[4704]: I0122 16:48:41.794823 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" 
event={"ID":"48dfb2d3-192d-4033-afcf-1abfb1a31f59","Type":"ContainerStarted","Data":"eb0b2159f3c1c1a06aa8132ea2e7f8ee1665761a1c98b1bf54f8ab454745e43f"} Jan 22 16:48:41 crc kubenswrapper[4704]: I0122 16:48:41.797130 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"29d5ab67-1ca3-482c-987e-1f299f728372","Type":"ContainerStarted","Data":"99fb9373addcecd0349506a59cd1d6e42e4816c33e45b8128d6e638b9cc2613f"} Jan 22 16:48:41 crc kubenswrapper[4704]: I0122 16:48:41.797893 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:48:41 crc kubenswrapper[4704]: I0122 16:48:41.800945 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerStarted","Data":"7bc55e4f4588c32f852eb41beb1dfe56ea27ecbc823ae0a39153755f90d17cba"} Jan 22 16:48:41 crc kubenswrapper[4704]: I0122 16:48:41.828124 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.713735462 podStartE2EDuration="32.828101534s" podCreationTimestamp="2026-01-22 16:48:09 +0000 UTC" firstStartedPulling="2026-01-22 16:48:10.678572067 +0000 UTC m=+1183.323118767" lastFinishedPulling="2026-01-22 16:48:40.792938129 +0000 UTC m=+1213.437484839" observedRunningTime="2026-01-22 16:48:41.818382258 +0000 UTC m=+1214.462928988" watchObservedRunningTime="2026-01-22 16:48:41.828101534 +0000 UTC m=+1214.472648244" Jan 22 16:48:43 crc kubenswrapper[4704]: I0122 16:48:43.815567 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"48dfb2d3-192d-4033-afcf-1abfb1a31f59","Type":"ContainerStarted","Data":"a78ac36a54b6639d51dbe88720e68f4af65eba345b0651e3800a1d860cabcf83"} Jan 22 16:48:43 crc kubenswrapper[4704]: I0122 16:48:43.815993 
4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:43 crc kubenswrapper[4704]: I0122 16:48:43.818072 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerStarted","Data":"679e7706df18f7c90d3a1d74b797a7623a979c42d2c0dd9b230cf7c141d5a7ac"} Jan 22 16:48:43 crc kubenswrapper[4704]: I0122 16:48:43.823092 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 16:48:43 crc kubenswrapper[4704]: I0122 16:48:43.839655 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=7.318903886 podStartE2EDuration="33.839640845s" podCreationTimestamp="2026-01-22 16:48:10 +0000 UTC" firstStartedPulling="2026-01-22 16:48:11.696179803 +0000 UTC m=+1184.340726493" lastFinishedPulling="2026-01-22 16:48:38.216916752 +0000 UTC m=+1210.861463452" observedRunningTime="2026-01-22 16:48:43.835036404 +0000 UTC m=+1216.479583104" watchObservedRunningTime="2026-01-22 16:48:43.839640845 +0000 UTC m=+1216.484187545" Jan 22 16:48:44 crc kubenswrapper[4704]: I0122 16:48:44.732924 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.024934 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-g7j9b"] Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.026468 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.028568 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.034082 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-g7j9b"] Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.189300 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59966f70-fec7-4445-8284-f9216b4ca610-operator-scripts\") pod \"root-account-create-update-g7j9b\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.189346 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/59966f70-fec7-4445-8284-f9216b4ca610-kube-api-access-6g62h\") pod \"root-account-create-update-g7j9b\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.290178 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59966f70-fec7-4445-8284-f9216b4ca610-operator-scripts\") pod \"root-account-create-update-g7j9b\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.290225 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g62h\" (UniqueName: 
\"kubernetes.io/projected/59966f70-fec7-4445-8284-f9216b4ca610-kube-api-access-6g62h\") pod \"root-account-create-update-g7j9b\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.291150 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59966f70-fec7-4445-8284-f9216b4ca610-operator-scripts\") pod \"root-account-create-update-g7j9b\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.318472 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/59966f70-fec7-4445-8284-f9216b4ca610-kube-api-access-6g62h\") pod \"root-account-create-update-g7j9b\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.345155 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:48 crc kubenswrapper[4704]: I0122 16:48:48.934308 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-g7j9b"] Jan 22 16:48:48 crc kubenswrapper[4704]: W0122 16:48:48.945106 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59966f70_fec7_4445_8284_f9216b4ca610.slice/crio-502107a7d72af7c620b1ae8f34b6a8546f5282ce605ee69dd32f3960d260b5f8 WatchSource:0}: Error finding container 502107a7d72af7c620b1ae8f34b6a8546f5282ce605ee69dd32f3960d260b5f8: Status 404 returned error can't find the container with id 502107a7d72af7c620b1ae8f34b6a8546f5282ce605ee69dd32f3960d260b5f8 Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.287577 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-rh526"] Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.288827 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.299068 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-rh526"] Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.383473 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-d016-account-create-update-dgxmw"] Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.384733 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.389098 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.393909 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-d016-account-create-update-dgxmw"] Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.409446 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e480d-279d-4896-84a6-638c9b870958-operator-scripts\") pod \"keystone-db-create-rh526\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.409499 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8h5\" (UniqueName: \"kubernetes.io/projected/bc2e480d-279d-4896-84a6-638c9b870958-kube-api-access-gj8h5\") pod \"keystone-db-create-rh526\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.510819 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-operator-scripts\") pod \"keystone-d016-account-create-update-dgxmw\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.511059 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4sdb\" (UniqueName: 
\"kubernetes.io/projected/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-kube-api-access-v4sdb\") pod \"keystone-d016-account-create-update-dgxmw\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.511182 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e480d-279d-4896-84a6-638c9b870958-operator-scripts\") pod \"keystone-db-create-rh526\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.511265 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj8h5\" (UniqueName: \"kubernetes.io/projected/bc2e480d-279d-4896-84a6-638c9b870958-kube-api-access-gj8h5\") pod \"keystone-db-create-rh526\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.512142 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e480d-279d-4896-84a6-638c9b870958-operator-scripts\") pod \"keystone-db-create-rh526\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.531557 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj8h5\" (UniqueName: \"kubernetes.io/projected/bc2e480d-279d-4896-84a6-638c9b870958-kube-api-access-gj8h5\") pod \"keystone-db-create-rh526\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.606771 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.613081 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-operator-scripts\") pod \"keystone-d016-account-create-update-dgxmw\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.613195 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4sdb\" (UniqueName: \"kubernetes.io/projected/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-kube-api-access-v4sdb\") pod \"keystone-d016-account-create-update-dgxmw\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.613998 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-operator-scripts\") pod \"keystone-d016-account-create-update-dgxmw\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.632957 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4sdb\" (UniqueName: \"kubernetes.io/projected/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-kube-api-access-v4sdb\") pod \"keystone-d016-account-create-update-dgxmw\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.701103 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:49 crc kubenswrapper[4704]: I0122 16:48:49.859628 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-g7j9b" event={"ID":"59966f70-fec7-4445-8284-f9216b4ca610","Type":"ContainerStarted","Data":"502107a7d72af7c620b1ae8f34b6a8546f5282ce605ee69dd32f3960d260b5f8"} Jan 22 16:48:50 crc kubenswrapper[4704]: I0122 16:48:50.048548 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-rh526"] Jan 22 16:48:50 crc kubenswrapper[4704]: I0122 16:48:50.053330 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:48:50 crc kubenswrapper[4704]: I0122 16:48:50.174279 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-d016-account-create-update-dgxmw"] Jan 22 16:48:50 crc kubenswrapper[4704]: I0122 16:48:50.870926 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-rh526" event={"ID":"bc2e480d-279d-4896-84a6-638c9b870958","Type":"ContainerStarted","Data":"f716273cd44add144b3fdcea0a0097cccb3ae33624cbf446738012cc77d1f1ef"} Jan 22 16:48:50 crc kubenswrapper[4704]: I0122 16:48:50.874648 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-g7j9b" event={"ID":"59966f70-fec7-4445-8284-f9216b4ca610","Type":"ContainerStarted","Data":"7ce7aca866bc88ce286aed2e6b4312002f7e2bca81e58995f8ca8878cf634cbb"} Jan 22 16:48:50 crc kubenswrapper[4704]: I0122 16:48:50.878687 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" event={"ID":"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69","Type":"ContainerStarted","Data":"2a3a03978a183e12ac39483e24d9fe19dab7ca17085cadd90a27981f3d2547cc"} Jan 22 16:48:51 crc 
kubenswrapper[4704]: I0122 16:48:51.887448 4704 generic.go:334] "Generic (PLEG): container finished" podID="bc2e480d-279d-4896-84a6-638c9b870958" containerID="403176ccb83b11ca30c547005a1a5859a5e67e576901abf2d5b18f7088b0ad7f" exitCode=0 Jan 22 16:48:51 crc kubenswrapper[4704]: I0122 16:48:51.887571 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-rh526" event={"ID":"bc2e480d-279d-4896-84a6-638c9b870958","Type":"ContainerDied","Data":"403176ccb83b11ca30c547005a1a5859a5e67e576901abf2d5b18f7088b0ad7f"} Jan 22 16:48:51 crc kubenswrapper[4704]: I0122 16:48:51.891106 4704 generic.go:334] "Generic (PLEG): container finished" podID="59966f70-fec7-4445-8284-f9216b4ca610" containerID="7ce7aca866bc88ce286aed2e6b4312002f7e2bca81e58995f8ca8878cf634cbb" exitCode=0 Jan 22 16:48:51 crc kubenswrapper[4704]: I0122 16:48:51.891184 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-g7j9b" event={"ID":"59966f70-fec7-4445-8284-f9216b4ca610","Type":"ContainerDied","Data":"7ce7aca866bc88ce286aed2e6b4312002f7e2bca81e58995f8ca8878cf634cbb"} Jan 22 16:48:51 crc kubenswrapper[4704]: I0122 16:48:51.893161 4704 generic.go:334] "Generic (PLEG): container finished" podID="2ce39ad9-5a21-4580-9adc-e2e23fc4bc69" containerID="ba1f32027fd0f7d936c42b1430588f5284c4255ae8a28fc519c4f21563cfbbbc" exitCode=0 Jan 22 16:48:51 crc kubenswrapper[4704]: I0122 16:48:51.893199 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" event={"ID":"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69","Type":"ContainerDied","Data":"ba1f32027fd0f7d936c42b1430588f5284c4255ae8a28fc519c4f21563cfbbbc"} Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.546998 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.552556 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.557637 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.679918 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-operator-scripts\") pod \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.679985 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/59966f70-fec7-4445-8284-f9216b4ca610-kube-api-access-6g62h\") pod \"59966f70-fec7-4445-8284-f9216b4ca610\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.680037 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj8h5\" (UniqueName: \"kubernetes.io/projected/bc2e480d-279d-4896-84a6-638c9b870958-kube-api-access-gj8h5\") pod \"bc2e480d-279d-4896-84a6-638c9b870958\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.680091 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e480d-279d-4896-84a6-638c9b870958-operator-scripts\") pod \"bc2e480d-279d-4896-84a6-638c9b870958\" (UID: \"bc2e480d-279d-4896-84a6-638c9b870958\") " Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 
16:48:53.680121 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4sdb\" (UniqueName: \"kubernetes.io/projected/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-kube-api-access-v4sdb\") pod \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\" (UID: \"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69\") " Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.680211 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59966f70-fec7-4445-8284-f9216b4ca610-operator-scripts\") pod \"59966f70-fec7-4445-8284-f9216b4ca610\" (UID: \"59966f70-fec7-4445-8284-f9216b4ca610\") " Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.680550 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2e480d-279d-4896-84a6-638c9b870958-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc2e480d-279d-4896-84a6-638c9b870958" (UID: "bc2e480d-279d-4896-84a6-638c9b870958"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.681030 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ce39ad9-5a21-4580-9adc-e2e23fc4bc69" (UID: "2ce39ad9-5a21-4580-9adc-e2e23fc4bc69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.681104 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59966f70-fec7-4445-8284-f9216b4ca610-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59966f70-fec7-4445-8284-f9216b4ca610" (UID: "59966f70-fec7-4445-8284-f9216b4ca610"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.685542 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2e480d-279d-4896-84a6-638c9b870958-kube-api-access-gj8h5" (OuterVolumeSpecName: "kube-api-access-gj8h5") pod "bc2e480d-279d-4896-84a6-638c9b870958" (UID: "bc2e480d-279d-4896-84a6-638c9b870958"). InnerVolumeSpecName "kube-api-access-gj8h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.685646 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59966f70-fec7-4445-8284-f9216b4ca610-kube-api-access-6g62h" (OuterVolumeSpecName: "kube-api-access-6g62h") pod "59966f70-fec7-4445-8284-f9216b4ca610" (UID: "59966f70-fec7-4445-8284-f9216b4ca610"). InnerVolumeSpecName "kube-api-access-6g62h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.687962 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-kube-api-access-v4sdb" (OuterVolumeSpecName: "kube-api-access-v4sdb") pod "2ce39ad9-5a21-4580-9adc-e2e23fc4bc69" (UID: "2ce39ad9-5a21-4580-9adc-e2e23fc4bc69"). InnerVolumeSpecName "kube-api-access-v4sdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.782429 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g62h\" (UniqueName: \"kubernetes.io/projected/59966f70-fec7-4445-8284-f9216b4ca610-kube-api-access-6g62h\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.782468 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj8h5\" (UniqueName: \"kubernetes.io/projected/bc2e480d-279d-4896-84a6-638c9b870958-kube-api-access-gj8h5\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.782479 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e480d-279d-4896-84a6-638c9b870958-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.782488 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4sdb\" (UniqueName: \"kubernetes.io/projected/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-kube-api-access-v4sdb\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.782497 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59966f70-fec7-4445-8284-f9216b4ca610-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.782506 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.911306 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerStarted","Data":"f42c1e3bd5a41c92dedaa241cbbf7dad767525f34daa693d5182538a04c21a47"} Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.912877 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-rh526" event={"ID":"bc2e480d-279d-4896-84a6-638c9b870958","Type":"ContainerDied","Data":"f716273cd44add144b3fdcea0a0097cccb3ae33624cbf446738012cc77d1f1ef"} Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.912924 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f716273cd44add144b3fdcea0a0097cccb3ae33624cbf446738012cc77d1f1ef" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.912982 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-rh526" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.917819 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-g7j9b" event={"ID":"59966f70-fec7-4445-8284-f9216b4ca610","Type":"ContainerDied","Data":"502107a7d72af7c620b1ae8f34b6a8546f5282ce605ee69dd32f3960d260b5f8"} Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.917847 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-g7j9b" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.917855 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="502107a7d72af7c620b1ae8f34b6a8546f5282ce605ee69dd32f3960d260b5f8" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.919389 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" event={"ID":"2ce39ad9-5a21-4580-9adc-e2e23fc4bc69","Type":"ContainerDied","Data":"2a3a03978a183e12ac39483e24d9fe19dab7ca17085cadd90a27981f3d2547cc"} Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.919417 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a3a03978a183e12ac39483e24d9fe19dab7ca17085cadd90a27981f3d2547cc" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.919471 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-d016-account-create-update-dgxmw" Jan 22 16:48:53 crc kubenswrapper[4704]: I0122 16:48:53.955609 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=5.020579569 podStartE2EDuration="44.955585923s" podCreationTimestamp="2026-01-22 16:48:09 +0000 UTC" firstStartedPulling="2026-01-22 16:48:13.611096467 +0000 UTC m=+1186.255643167" lastFinishedPulling="2026-01-22 16:48:53.546102821 +0000 UTC m=+1226.190649521" observedRunningTime="2026-01-22 16:48:53.950359185 +0000 UTC m=+1226.594905915" watchObservedRunningTime="2026-01-22 16:48:53.955585923 +0000 UTC m=+1226.600132623" Jan 22 16:48:56 crc kubenswrapper[4704]: I0122 16:48:56.894666 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:56 crc kubenswrapper[4704]: I0122 16:48:56.895156 4704 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:56 crc kubenswrapper[4704]: I0122 16:48:56.898971 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:56 crc kubenswrapper[4704]: I0122 16:48:56.945692 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:48:57 crc kubenswrapper[4704]: I0122 16:48:57.824949 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76479b6979-x64kd" podUID="31ee5638-ee25-460d-ac71-44e5a9aafc9b" containerName="console" containerID="cri-o://c7efb83ef3f1befbb64b04b9016030b2825ef234d9f4e6f610306ae4c2e72139" gracePeriod=15 Jan 22 16:48:57 crc kubenswrapper[4704]: I0122 16:48:57.968669 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76479b6979-x64kd_31ee5638-ee25-460d-ac71-44e5a9aafc9b/console/0.log" Jan 22 16:48:57 crc kubenswrapper[4704]: I0122 16:48:57.968729 4704 generic.go:334] "Generic (PLEG): container finished" podID="31ee5638-ee25-460d-ac71-44e5a9aafc9b" containerID="c7efb83ef3f1befbb64b04b9016030b2825ef234d9f4e6f610306ae4c2e72139" exitCode=2 Jan 22 16:48:57 crc kubenswrapper[4704]: I0122 16:48:57.971519 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76479b6979-x64kd" event={"ID":"31ee5638-ee25-460d-ac71-44e5a9aafc9b","Type":"ContainerDied","Data":"c7efb83ef3f1befbb64b04b9016030b2825ef234d9f4e6f610306ae4c2e72139"} Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.297567 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76479b6979-x64kd_31ee5638-ee25-460d-ac71-44e5a9aafc9b/console/0.log" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.297959 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.358281 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-trusted-ca-bundle\") pod \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.358329 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-serving-cert\") pod \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.358428 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-service-ca\") pod \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.358480 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x8tk\" (UniqueName: \"kubernetes.io/projected/31ee5638-ee25-460d-ac71-44e5a9aafc9b-kube-api-access-6x8tk\") pod \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.358513 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-config\") pod \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.358549 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-oauth-config\") pod \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.358620 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-oauth-serving-cert\") pod \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\" (UID: \"31ee5638-ee25-460d-ac71-44e5a9aafc9b\") " Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.359808 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "31ee5638-ee25-460d-ac71-44e5a9aafc9b" (UID: "31ee5638-ee25-460d-ac71-44e5a9aafc9b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.360324 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "31ee5638-ee25-460d-ac71-44e5a9aafc9b" (UID: "31ee5638-ee25-460d-ac71-44e5a9aafc9b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.361545 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-service-ca" (OuterVolumeSpecName: "service-ca") pod "31ee5638-ee25-460d-ac71-44e5a9aafc9b" (UID: "31ee5638-ee25-460d-ac71-44e5a9aafc9b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.362096 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-config" (OuterVolumeSpecName: "console-config") pod "31ee5638-ee25-460d-ac71-44e5a9aafc9b" (UID: "31ee5638-ee25-460d-ac71-44e5a9aafc9b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.366446 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "31ee5638-ee25-460d-ac71-44e5a9aafc9b" (UID: "31ee5638-ee25-460d-ac71-44e5a9aafc9b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.369030 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "31ee5638-ee25-460d-ac71-44e5a9aafc9b" (UID: "31ee5638-ee25-460d-ac71-44e5a9aafc9b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.369191 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ee5638-ee25-460d-ac71-44e5a9aafc9b-kube-api-access-6x8tk" (OuterVolumeSpecName: "kube-api-access-6x8tk") pod "31ee5638-ee25-460d-ac71-44e5a9aafc9b" (UID: "31ee5638-ee25-460d-ac71-44e5a9aafc9b"). InnerVolumeSpecName "kube-api-access-6x8tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.460212 4704 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.460246 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x8tk\" (UniqueName: \"kubernetes.io/projected/31ee5638-ee25-460d-ac71-44e5a9aafc9b-kube-api-access-6x8tk\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.460257 4704 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.460267 4704 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.460275 4704 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.460285 4704 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ee5638-ee25-460d-ac71-44e5a9aafc9b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.460295 4704 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ee5638-ee25-460d-ac71-44e5a9aafc9b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 16:48:58 crc 
kubenswrapper[4704]: I0122 16:48:58.979202 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76479b6979-x64kd_31ee5638-ee25-460d-ac71-44e5a9aafc9b/console/0.log" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.979269 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76479b6979-x64kd" event={"ID":"31ee5638-ee25-460d-ac71-44e5a9aafc9b","Type":"ContainerDied","Data":"ed0c12d63fccfe7c55bfff34ae7dfd9c65ee50ead615d408f2a4f08f77d763cd"} Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.979324 4704 scope.go:117] "RemoveContainer" containerID="c7efb83ef3f1befbb64b04b9016030b2825ef234d9f4e6f610306ae4c2e72139" Jan 22 16:48:58 crc kubenswrapper[4704]: I0122 16:48:58.979360 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76479b6979-x64kd" Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.017344 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76479b6979-x64kd"] Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.025188 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76479b6979-x64kd"] Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.609375 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.609999 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="prometheus" containerID="cri-o://7bc55e4f4588c32f852eb41beb1dfe56ea27ecbc823ae0a39153755f90d17cba" gracePeriod=600 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.610082 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="thanos-sidecar" containerID="cri-o://f42c1e3bd5a41c92dedaa241cbbf7dad767525f34daa693d5182538a04c21a47" gracePeriod=600 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.610096 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="config-reloader" containerID="cri-o://679e7706df18f7c90d3a1d74b797a7623a979c42d2c0dd9b230cf7c141d5a7ac" gracePeriod=600 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.643390 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ee5638-ee25-460d-ac71-44e5a9aafc9b" path="/var/lib/kubelet/pods/31ee5638-ee25-460d-ac71-44e5a9aafc9b/volumes" Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.987956 4704 generic.go:334] "Generic (PLEG): container finished" podID="e2ef8e1a-f771-48a2-a61b-866950a3f0a0" containerID="fe6c65fa30bcdd8d9393fda49e8704794cd648111b4d02191c3b3705189cd9bf" exitCode=0 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.988031 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"e2ef8e1a-f771-48a2-a61b-866950a3f0a0","Type":"ContainerDied","Data":"fe6c65fa30bcdd8d9393fda49e8704794cd648111b4d02191c3b3705189cd9bf"} Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.993088 4704 generic.go:334] "Generic (PLEG): container finished" podID="1b171faa-1b29-41f7-9582-8e8003603f75" containerID="4645b94dd89746cbb4c52287082d185afbc7208c37da980b73e1265dfdb97163" exitCode=0 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.993149 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1b171faa-1b29-41f7-9582-8e8003603f75","Type":"ContainerDied","Data":"4645b94dd89746cbb4c52287082d185afbc7208c37da980b73e1265dfdb97163"} Jan 22 16:48:59 crc kubenswrapper[4704]: 
I0122 16:48:59.997414 4704 generic.go:334] "Generic (PLEG): container finished" podID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerID="f42c1e3bd5a41c92dedaa241cbbf7dad767525f34daa693d5182538a04c21a47" exitCode=0 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.997437 4704 generic.go:334] "Generic (PLEG): container finished" podID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerID="679e7706df18f7c90d3a1d74b797a7623a979c42d2c0dd9b230cf7c141d5a7ac" exitCode=0 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.997446 4704 generic.go:334] "Generic (PLEG): container finished" podID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerID="7bc55e4f4588c32f852eb41beb1dfe56ea27ecbc823ae0a39153755f90d17cba" exitCode=0 Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.997462 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerDied","Data":"f42c1e3bd5a41c92dedaa241cbbf7dad767525f34daa693d5182538a04c21a47"} Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.997483 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerDied","Data":"679e7706df18f7c90d3a1d74b797a7623a979c42d2c0dd9b230cf7c141d5a7ac"} Jan 22 16:48:59 crc kubenswrapper[4704]: I0122 16:48:59.997493 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerDied","Data":"7bc55e4f4588c32f852eb41beb1dfe56ea27ecbc823ae0a39153755f90d17cba"} Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.572419 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694419 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-config\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694479 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-thanos-prometheus-http-client-file\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694515 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-1\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694536 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-2\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694578 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksrvr\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-kube-api-access-ksrvr\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc 
kubenswrapper[4704]: I0122 16:49:00.694601 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-web-config\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694629 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3117e69-0a16-4403-a4a5-c35e78f711e6-config-out\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694660 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-0\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694684 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-tls-assets\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.694816 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"e3117e69-0a16-4403-a4a5-c35e78f711e6\" (UID: \"e3117e69-0a16-4403-a4a5-c35e78f711e6\") " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.695819 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.695896 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.696468 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.707498 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-config" (OuterVolumeSpecName: "config") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.707508 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3117e69-0a16-4403-a4a5-c35e78f711e6-config-out" (OuterVolumeSpecName: "config-out") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.707561 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.707570 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.707778 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-kube-api-access-ksrvr" (OuterVolumeSpecName: "kube-api-access-ksrvr") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "kube-api-access-ksrvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.719356 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-web-config" (OuterVolumeSpecName: "web-config") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.722501 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e3117e69-0a16-4403-a4a5-c35e78f711e6" (UID: "e3117e69-0a16-4403-a4a5-c35e78f711e6"). InnerVolumeSpecName "pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796327 4704 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796361 4704 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796419 4704 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") on node \"crc\" " Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796430 4704 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796441 4704 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796451 4704 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796460 4704 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e3117e69-0a16-4403-a4a5-c35e78f711e6-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796471 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksrvr\" (UniqueName: \"kubernetes.io/projected/e3117e69-0a16-4403-a4a5-c35e78f711e6-kube-api-access-ksrvr\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796481 4704 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e3117e69-0a16-4403-a4a5-c35e78f711e6-web-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.796491 4704 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e3117e69-0a16-4403-a4a5-c35e78f711e6-config-out\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.815535 4704 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping UnmountDevice... Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.815692 4704 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69") on node "crc" Jan 22 16:49:00 crc kubenswrapper[4704]: I0122 16:49:00.898320 4704 reconciler_common.go:293] "Volume detached for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.007592 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"1b171faa-1b29-41f7-9582-8e8003603f75","Type":"ContainerStarted","Data":"c4435433a9802e0c0331d8b61dc47c6482879698b6142eb59e0d5b12f9e40de4"} Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.007825 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.011237 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"e3117e69-0a16-4403-a4a5-c35e78f711e6","Type":"ContainerDied","Data":"f7fd926b9d2a5a1e7c74261c12cb03397263c014920a5b56434bc93a7b3843a6"} Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.011267 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.011281 4704 scope.go:117] "RemoveContainer" containerID="f42c1e3bd5a41c92dedaa241cbbf7dad767525f34daa693d5182538a04c21a47" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.015423 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"e2ef8e1a-f771-48a2-a61b-866950a3f0a0","Type":"ContainerStarted","Data":"c1d38a953fbca9a3b19c7317009c7358db6b253877dbbdc699cbdda87da32e76"} Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.015713 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.031998 4704 scope.go:117] "RemoveContainer" containerID="679e7706df18f7c90d3a1d74b797a7623a979c42d2c0dd9b230cf7c141d5a7ac" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.045863 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=39.502912288 podStartE2EDuration="56.045839082s" podCreationTimestamp="2026-01-22 16:48:05 +0000 UTC" firstStartedPulling="2026-01-22 16:48:07.531156517 +0000 UTC m=+1180.175703217" lastFinishedPulling="2026-01-22 16:48:24.074083311 +0000 UTC m=+1196.718630011" observedRunningTime="2026-01-22 16:49:01.038494014 +0000 UTC m=+1233.683040714" watchObservedRunningTime="2026-01-22 16:49:01.045839082 +0000 UTC m=+1233.690385782" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.054805 4704 scope.go:117] "RemoveContainer" containerID="7bc55e4f4588c32f852eb41beb1dfe56ea27ecbc823ae0a39153755f90d17cba" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.069572 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=39.671842064 
podStartE2EDuration="55.069553744s" podCreationTimestamp="2026-01-22 16:48:06 +0000 UTC" firstStartedPulling="2026-01-22 16:48:08.570748747 +0000 UTC m=+1181.215295447" lastFinishedPulling="2026-01-22 16:48:23.968460427 +0000 UTC m=+1196.613007127" observedRunningTime="2026-01-22 16:49:01.067371792 +0000 UTC m=+1233.711918512" watchObservedRunningTime="2026-01-22 16:49:01.069553744 +0000 UTC m=+1233.714100444" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.087712 4704 scope.go:117] "RemoveContainer" containerID="4b6b47ea6989e53f260594ddd9d48694140a8288ea384b7bbd60c9570eefc051" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.105384 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.112292 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.117577 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:49:01 crc kubenswrapper[4704]: E0122 16:49:01.118008 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="config-reloader" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118024 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="config-reloader" Jan 22 16:49:01 crc kubenswrapper[4704]: E0122 16:49:01.118047 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ee5638-ee25-460d-ac71-44e5a9aafc9b" containerName="console" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118054 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ee5638-ee25-460d-ac71-44e5a9aafc9b" containerName="console" Jan 22 16:49:01 crc kubenswrapper[4704]: E0122 16:49:01.118070 4704 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2ce39ad9-5a21-4580-9adc-e2e23fc4bc69" containerName="mariadb-account-create-update" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118077 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce39ad9-5a21-4580-9adc-e2e23fc4bc69" containerName="mariadb-account-create-update" Jan 22 16:49:01 crc kubenswrapper[4704]: E0122 16:49:01.118086 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2e480d-279d-4896-84a6-638c9b870958" containerName="mariadb-database-create" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118093 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2e480d-279d-4896-84a6-638c9b870958" containerName="mariadb-database-create" Jan 22 16:49:01 crc kubenswrapper[4704]: E0122 16:49:01.118109 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="init-config-reloader" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118115 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="init-config-reloader" Jan 22 16:49:01 crc kubenswrapper[4704]: E0122 16:49:01.118125 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59966f70-fec7-4445-8284-f9216b4ca610" containerName="mariadb-account-create-update" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118131 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="59966f70-fec7-4445-8284-f9216b4ca610" containerName="mariadb-account-create-update" Jan 22 16:49:01 crc kubenswrapper[4704]: E0122 16:49:01.118149 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="thanos-sidecar" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118155 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="thanos-sidecar" Jan 22 16:49:01 crc kubenswrapper[4704]: 
E0122 16:49:01.118173 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="prometheus" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118178 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="prometheus" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118429 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ee5638-ee25-460d-ac71-44e5a9aafc9b" containerName="console" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118476 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="config-reloader" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118521 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="thanos-sidecar" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118532 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce39ad9-5a21-4580-9adc-e2e23fc4bc69" containerName="mariadb-account-create-update" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118541 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="59966f70-fec7-4445-8284-f9216b4ca610" containerName="mariadb-account-create-update" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118578 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2e480d-279d-4896-84a6-638c9b870958" containerName="mariadb-database-create" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.118591 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" containerName="prometheus" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.121285 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.137304 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.137446 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-jm6cw" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.137523 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.137654 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.137832 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.138263 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.138467 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.139818 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.142914 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.172641 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.203577 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.204002 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxh56\" (UniqueName: \"kubernetes.io/projected/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-kube-api-access-kxh56\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.204131 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.204232 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.204359 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.204496 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.204697 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.205000 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.205181 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.205285 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.205357 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.205439 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.205529 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-config\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 
16:49:01.306951 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307240 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307316 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307399 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307483 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-config\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307570 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307655 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxh56\" (UniqueName: \"kubernetes.io/projected/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-kube-api-access-kxh56\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307727 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307809 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307889 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.307968 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.308047 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.308231 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.308715 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 
crc kubenswrapper[4704]: I0122 16:49:01.309497 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.309738 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.312735 4704 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.312779 4704 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f6cee35d84fb12089ddcd0f9d057c4fa69f92d7ca02888ccd6b2ec4e6b69478/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.313117 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.313218 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.313338 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.313832 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.314104 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.314717 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-config\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.315414 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.315472 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc 
kubenswrapper[4704]: I0122 16:49:01.325582 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxh56\" (UniqueName: \"kubernetes.io/projected/45beee7e-d2c1-4150-a2d1-f9a6bf02eb42-kube-api-access-kxh56\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.334626 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a496ae6e-ec56-42bd-9a71-5e907eb90e69\") pod \"prometheus-metric-storage-0\" (UID: \"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.461939 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.654759 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3117e69-0a16-4403-a4a5-c35e78f711e6" path="/var/lib/kubelet/pods/e3117e69-0a16-4403-a4a5-c35e78f711e6/volumes" Jan 22 16:49:01 crc kubenswrapper[4704]: I0122 16:49:01.802561 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 16:49:01 crc kubenswrapper[4704]: W0122 16:49:01.820961 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45beee7e_d2c1_4150_a2d1_f9a6bf02eb42.slice/crio-e90d7e7f53dd76ca19110466faf36096c82c493704ce1ac2f351d773deb27d13 WatchSource:0}: Error finding container e90d7e7f53dd76ca19110466faf36096c82c493704ce1ac2f351d773deb27d13: Status 404 returned error can't find the container with id e90d7e7f53dd76ca19110466faf36096c82c493704ce1ac2f351d773deb27d13 Jan 22 16:49:02 crc 
kubenswrapper[4704]: I0122 16:49:02.025668 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42","Type":"ContainerStarted","Data":"e90d7e7f53dd76ca19110466faf36096c82c493704ce1ac2f351d773deb27d13"} Jan 22 16:49:05 crc kubenswrapper[4704]: I0122 16:49:05.061074 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42","Type":"ContainerStarted","Data":"827cf3fb469be5f3121783cffbd0f6bfb696ee5407a78b506ff85506740af04e"} Jan 22 16:49:12 crc kubenswrapper[4704]: I0122 16:49:12.113668 4704 generic.go:334] "Generic (PLEG): container finished" podID="45beee7e-d2c1-4150-a2d1-f9a6bf02eb42" containerID="827cf3fb469be5f3121783cffbd0f6bfb696ee5407a78b506ff85506740af04e" exitCode=0 Jan 22 16:49:12 crc kubenswrapper[4704]: I0122 16:49:12.113762 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42","Type":"ContainerDied","Data":"827cf3fb469be5f3121783cffbd0f6bfb696ee5407a78b506ff85506740af04e"} Jan 22 16:49:13 crc kubenswrapper[4704]: I0122 16:49:13.128100 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42","Type":"ContainerStarted","Data":"aa05e3c31f27a8c2f3ad5da08ad891630567be9e0e42cb24b3a6f512871b542b"} Jan 22 16:49:15 crc kubenswrapper[4704]: I0122 16:49:15.145703 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42","Type":"ContainerStarted","Data":"811fcec0ddf2edf87659bff34dc7948fd55756ace2aabe05e8a451ebe753010b"} Jan 22 16:49:15 crc kubenswrapper[4704]: I0122 16:49:15.146283 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"45beee7e-d2c1-4150-a2d1-f9a6bf02eb42","Type":"ContainerStarted","Data":"a337ce52cffde5aa970ddab288f59568ce26f3e35eade2469e25139c8bb12d63"} Jan 22 16:49:15 crc kubenswrapper[4704]: I0122 16:49:15.185059 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=14.185041061 podStartE2EDuration="14.185041061s" podCreationTimestamp="2026-01-22 16:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:49:15.181320795 +0000 UTC m=+1247.825867495" watchObservedRunningTime="2026-01-22 16:49:15.185041061 +0000 UTC m=+1247.829587761" Jan 22 16:49:16 crc kubenswrapper[4704]: I0122 16:49:16.462951 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:16 crc kubenswrapper[4704]: I0122 16:49:16.463013 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:16 crc kubenswrapper[4704]: I0122 16:49:16.469692 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:17 crc kubenswrapper[4704]: I0122 16:49:17.054050 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 16:49:17 crc kubenswrapper[4704]: I0122 16:49:17.177684 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 16:49:17 crc kubenswrapper[4704]: I0122 16:49:17.893034 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 
16:49:19.086452 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.086857 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.364434 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-kw9c6"] Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.365376 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.368380 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.368529 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-7ktr4" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.373130 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.373191 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.381372 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-kw9c6"] Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.402938 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-config-data\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.403028 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzbmx\" (UniqueName: \"kubernetes.io/projected/e980c7c4-ea1e-4496-a188-da0c060ccbb3-kube-api-access-hzbmx\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.403105 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-combined-ca-bundle\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.504479 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-combined-ca-bundle\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.504530 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-config-data\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 
16:49:19.504617 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzbmx\" (UniqueName: \"kubernetes.io/projected/e980c7c4-ea1e-4496-a188-da0c060ccbb3-kube-api-access-hzbmx\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.512237 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-config-data\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.522329 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-combined-ca-bundle\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.524882 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzbmx\" (UniqueName: \"kubernetes.io/projected/e980c7c4-ea1e-4496-a188-da0c060ccbb3-kube-api-access-hzbmx\") pod \"keystone-db-sync-kw9c6\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:19 crc kubenswrapper[4704]: I0122 16:49:19.684150 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:20 crc kubenswrapper[4704]: I0122 16:49:20.165537 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-kw9c6"] Jan 22 16:49:20 crc kubenswrapper[4704]: I0122 16:49:20.187973 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" event={"ID":"e980c7c4-ea1e-4496-a188-da0c060ccbb3","Type":"ContainerStarted","Data":"a141021753a8970c96899e11b4aa5ffdbef6b3256c8375d989efc9d812064995"} Jan 22 16:49:28 crc kubenswrapper[4704]: I0122 16:49:28.262627 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" event={"ID":"e980c7c4-ea1e-4496-a188-da0c060ccbb3","Type":"ContainerStarted","Data":"1cbaa70673d363d3b1484242899ac4ae72d21e2821aedebf1ed3c7c86b666fce"} Jan 22 16:49:28 crc kubenswrapper[4704]: I0122 16:49:28.281078 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" podStartSLOduration=1.488324672 podStartE2EDuration="9.281062498s" podCreationTimestamp="2026-01-22 16:49:19 +0000 UTC" firstStartedPulling="2026-01-22 16:49:20.180395446 +0000 UTC m=+1252.824942146" lastFinishedPulling="2026-01-22 16:49:27.973133272 +0000 UTC m=+1260.617679972" observedRunningTime="2026-01-22 16:49:28.279903495 +0000 UTC m=+1260.924450205" watchObservedRunningTime="2026-01-22 16:49:28.281062498 +0000 UTC m=+1260.925609198" Jan 22 16:49:32 crc kubenswrapper[4704]: I0122 16:49:32.298121 4704 generic.go:334] "Generic (PLEG): container finished" podID="e980c7c4-ea1e-4496-a188-da0c060ccbb3" containerID="1cbaa70673d363d3b1484242899ac4ae72d21e2821aedebf1ed3c7c86b666fce" exitCode=0 Jan 22 16:49:32 crc kubenswrapper[4704]: I0122 16:49:32.298773 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" 
event={"ID":"e980c7c4-ea1e-4496-a188-da0c060ccbb3","Type":"ContainerDied","Data":"1cbaa70673d363d3b1484242899ac4ae72d21e2821aedebf1ed3c7c86b666fce"} Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.626734 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.757649 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-combined-ca-bundle\") pod \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.757903 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzbmx\" (UniqueName: \"kubernetes.io/projected/e980c7c4-ea1e-4496-a188-da0c060ccbb3-kube-api-access-hzbmx\") pod \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.757958 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-config-data\") pod \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\" (UID: \"e980c7c4-ea1e-4496-a188-da0c060ccbb3\") " Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.762816 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e980c7c4-ea1e-4496-a188-da0c060ccbb3-kube-api-access-hzbmx" (OuterVolumeSpecName: "kube-api-access-hzbmx") pod "e980c7c4-ea1e-4496-a188-da0c060ccbb3" (UID: "e980c7c4-ea1e-4496-a188-da0c060ccbb3"). InnerVolumeSpecName "kube-api-access-hzbmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.781957 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e980c7c4-ea1e-4496-a188-da0c060ccbb3" (UID: "e980c7c4-ea1e-4496-a188-da0c060ccbb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.797269 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-config-data" (OuterVolumeSpecName: "config-data") pod "e980c7c4-ea1e-4496-a188-da0c060ccbb3" (UID: "e980c7c4-ea1e-4496-a188-da0c060ccbb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.859710 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.859743 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e980c7c4-ea1e-4496-a188-da0c060ccbb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:33 crc kubenswrapper[4704]: I0122 16:49:33.859757 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzbmx\" (UniqueName: \"kubernetes.io/projected/e980c7c4-ea1e-4496-a188-da0c060ccbb3-kube-api-access-hzbmx\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.314866 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" 
event={"ID":"e980c7c4-ea1e-4496-a188-da0c060ccbb3","Type":"ContainerDied","Data":"a141021753a8970c96899e11b4aa5ffdbef6b3256c8375d989efc9d812064995"} Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.314905 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-kw9c6" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.314910 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a141021753a8970c96899e11b4aa5ffdbef6b3256c8375d989efc9d812064995" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.543365 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-pl8ml"] Jan 22 16:49:34 crc kubenswrapper[4704]: E0122 16:49:34.543878 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e980c7c4-ea1e-4496-a188-da0c060ccbb3" containerName="keystone-db-sync" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.543912 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e980c7c4-ea1e-4496-a188-da0c060ccbb3" containerName="keystone-db-sync" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.545754 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e980c7c4-ea1e-4496-a188-da0c060ccbb3" containerName="keystone-db-sync" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.547394 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.559873 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.561212 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.561629 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-7ktr4" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.562153 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.564430 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.591571 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-pl8ml"] Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.799402 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-credential-keys\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.799486 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdtd2\" (UniqueName: \"kubernetes.io/projected/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-kube-api-access-vdtd2\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: 
I0122 16:49:34.800891 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-config-data\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.800958 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-combined-ca-bundle\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.801031 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-scripts\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.801099 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-fernet-keys\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.903834 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-scripts\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 
16:49:34.903917 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-fernet-keys\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.903980 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-credential-keys\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.904058 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdtd2\" (UniqueName: \"kubernetes.io/projected/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-kube-api-access-vdtd2\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.904110 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-config-data\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.904140 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-combined-ca-bundle\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.909346 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-scripts\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.923322 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-credential-keys\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.926007 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-combined-ca-bundle\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.928528 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-fernet-keys\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.934136 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-config-data\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.940623 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdtd2\" 
(UniqueName: \"kubernetes.io/projected/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-kube-api-access-vdtd2\") pod \"keystone-bootstrap-pl8ml\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.989363 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:49:34 crc kubenswrapper[4704]: I0122 16:49:34.991120 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.001015 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.001160 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.006929 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.111547 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-log-httpd\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.111639 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.111661 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-config-data\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.111715 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-run-httpd\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.111739 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4cpl\" (UniqueName: \"kubernetes.io/projected/5f252b90-0bee-45f5-b28d-9cb41b6de684-kube-api-access-r4cpl\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.111767 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.111783 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-scripts\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.145403 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.214685 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-run-httpd\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.215059 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4cpl\" (UniqueName: \"kubernetes.io/projected/5f252b90-0bee-45f5-b28d-9cb41b6de684-kube-api-access-r4cpl\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.215095 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.215123 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-scripts\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.215166 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-log-httpd\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.215194 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.215211 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-config-data\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.215374 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-run-httpd\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.219036 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-log-httpd\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.220497 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-config-data\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.220650 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.222495 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.223249 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-scripts\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.247938 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4cpl\" (UniqueName: \"kubernetes.io/projected/5f252b90-0bee-45f5-b28d-9cb41b6de684-kube-api-access-r4cpl\") pod \"ceilometer-0\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.357497 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.656154 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-pl8ml"] Jan 22 16:49:35 crc kubenswrapper[4704]: I0122 16:49:35.815554 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:49:35 crc kubenswrapper[4704]: W0122 16:49:35.818082 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f252b90_0bee_45f5_b28d_9cb41b6de684.slice/crio-92ef9724b83d3ca646ceadf0bb1de238dd3777c5131fb65b09191d83126edc4b WatchSource:0}: Error finding container 92ef9724b83d3ca646ceadf0bb1de238dd3777c5131fb65b09191d83126edc4b: Status 404 returned error can't find the container with id 92ef9724b83d3ca646ceadf0bb1de238dd3777c5131fb65b09191d83126edc4b Jan 22 16:49:36 crc kubenswrapper[4704]: I0122 16:49:36.334842 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" event={"ID":"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84","Type":"ContainerStarted","Data":"bd781ce7f268cfa7db8b9de30cae702a6786d112de0c8dd39f040f99949a9ddc"} Jan 22 16:49:36 crc kubenswrapper[4704]: I0122 16:49:36.335286 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" event={"ID":"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84","Type":"ContainerStarted","Data":"b157c122d03f3e2da170f268d5de0694e59a5240808323984e8a72779d07bf06"} Jan 22 16:49:36 crc kubenswrapper[4704]: I0122 16:49:36.336466 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerStarted","Data":"92ef9724b83d3ca646ceadf0bb1de238dd3777c5131fb65b09191d83126edc4b"} Jan 22 16:49:36 crc kubenswrapper[4704]: I0122 16:49:36.361675 4704 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" podStartSLOduration=2.361527436 podStartE2EDuration="2.361527436s" podCreationTimestamp="2026-01-22 16:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:49:36.350863714 +0000 UTC m=+1268.995410414" watchObservedRunningTime="2026-01-22 16:49:36.361527436 +0000 UTC m=+1269.006074136" Jan 22 16:49:36 crc kubenswrapper[4704]: I0122 16:49:36.697779 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:49:40 crc kubenswrapper[4704]: I0122 16:49:40.382264 4704 generic.go:334] "Generic (PLEG): container finished" podID="c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" containerID="bd781ce7f268cfa7db8b9de30cae702a6786d112de0c8dd39f040f99949a9ddc" exitCode=0 Jan 22 16:49:40 crc kubenswrapper[4704]: I0122 16:49:40.382370 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" event={"ID":"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84","Type":"ContainerDied","Data":"bd781ce7f268cfa7db8b9de30cae702a6786d112de0c8dd39f040f99949a9ddc"} Jan 22 16:49:41 crc kubenswrapper[4704]: I0122 16:49:41.904993 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.026729 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdtd2\" (UniqueName: \"kubernetes.io/projected/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-kube-api-access-vdtd2\") pod \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.027137 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-scripts\") pod \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.027189 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-combined-ca-bundle\") pod \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.027221 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-fernet-keys\") pod \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.027252 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-config-data\") pod \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.027288 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-credential-keys\") pod \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\" (UID: \"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84\") " Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.033428 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" (UID: "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.033830 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" (UID: "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.033864 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-kube-api-access-vdtd2" (OuterVolumeSpecName: "kube-api-access-vdtd2") pod "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" (UID: "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84"). InnerVolumeSpecName "kube-api-access-vdtd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.041058 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-scripts" (OuterVolumeSpecName: "scripts") pod "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" (UID: "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.054568 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" (UID: "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.065994 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-config-data" (OuterVolumeSpecName: "config-data") pod "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" (UID: "c6b0d908-b4c0-40c5-8ec6-72c18d7caf84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.128717 4704 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.128765 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.128776 4704 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.128805 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdtd2\" (UniqueName: \"kubernetes.io/projected/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-kube-api-access-vdtd2\") on node \"crc\" DevicePath \"\"" Jan 
22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.128819 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.128829 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.401189 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" event={"ID":"c6b0d908-b4c0-40c5-8ec6-72c18d7caf84","Type":"ContainerDied","Data":"b157c122d03f3e2da170f268d5de0694e59a5240808323984e8a72779d07bf06"} Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.401241 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b157c122d03f3e2da170f268d5de0694e59a5240808323984e8a72779d07bf06" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.401251 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-pl8ml" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.577942 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-pl8ml"] Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.587870 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-pl8ml"] Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.662939 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-jmxpt"] Jan 22 16:49:42 crc kubenswrapper[4704]: E0122 16:49:42.663387 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" containerName="keystone-bootstrap" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.663413 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" containerName="keystone-bootstrap" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.663605 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" containerName="keystone-bootstrap" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.664977 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.667135 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.667602 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.668073 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-7ktr4" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.668153 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.668074 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.696890 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-jmxpt"] Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.747574 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-credential-keys\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.747623 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-fernet-keys\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.747674 
4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-combined-ca-bundle\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.747696 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-config-data\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.747737 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jx5\" (UniqueName: \"kubernetes.io/projected/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-kube-api-access-29jx5\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.747761 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-scripts\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.849127 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29jx5\" (UniqueName: \"kubernetes.io/projected/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-kube-api-access-29jx5\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 
16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.849526 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-scripts\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.849600 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-credential-keys\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.849626 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-fernet-keys\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.849688 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-combined-ca-bundle\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.849709 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-config-data\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.856763 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-fernet-keys\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.858179 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-config-data\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.859910 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-scripts\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.860323 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-combined-ca-bundle\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.862204 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-credential-keys\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.881415 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29jx5\" 
(UniqueName: \"kubernetes.io/projected/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-kube-api-access-29jx5\") pod \"keystone-bootstrap-jmxpt\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:42 crc kubenswrapper[4704]: I0122 16:49:42.997239 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:43 crc kubenswrapper[4704]: I0122 16:49:43.409012 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerStarted","Data":"034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c"} Jan 22 16:49:43 crc kubenswrapper[4704]: I0122 16:49:43.435173 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-jmxpt"] Jan 22 16:49:43 crc kubenswrapper[4704]: W0122 16:49:43.578413 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6966404_cd7f_426d_ab2e_f7a6cf2c8959.slice/crio-1d1c2e2ab2f6e8928315bab52ba427bb15825cfd8d59b889d83d50daadabc8de WatchSource:0}: Error finding container 1d1c2e2ab2f6e8928315bab52ba427bb15825cfd8d59b889d83d50daadabc8de: Status 404 returned error can't find the container with id 1d1c2e2ab2f6e8928315bab52ba427bb15825cfd8d59b889d83d50daadabc8de Jan 22 16:49:43 crc kubenswrapper[4704]: I0122 16:49:43.650724 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b0d908-b4c0-40c5-8ec6-72c18d7caf84" path="/var/lib/kubelet/pods/c6b0d908-b4c0-40c5-8ec6-72c18d7caf84/volumes" Jan 22 16:49:44 crc kubenswrapper[4704]: I0122 16:49:44.417974 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" 
event={"ID":"e6966404-cd7f-426d-ab2e-f7a6cf2c8959","Type":"ContainerStarted","Data":"51aba93cbc57783b7925f69e5e1b668a2a53d2b7e61ea22b550798c72b4c6bb5"} Jan 22 16:49:44 crc kubenswrapper[4704]: I0122 16:49:44.418253 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" event={"ID":"e6966404-cd7f-426d-ab2e-f7a6cf2c8959","Type":"ContainerStarted","Data":"1d1c2e2ab2f6e8928315bab52ba427bb15825cfd8d59b889d83d50daadabc8de"} Jan 22 16:49:44 crc kubenswrapper[4704]: I0122 16:49:44.420640 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerStarted","Data":"ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e"} Jan 22 16:49:44 crc kubenswrapper[4704]: I0122 16:49:44.436392 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" podStartSLOduration=2.436361816 podStartE2EDuration="2.436361816s" podCreationTimestamp="2026-01-22 16:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:49:44.433570366 +0000 UTC m=+1277.078117086" watchObservedRunningTime="2026-01-22 16:49:44.436361816 +0000 UTC m=+1277.080908536" Jan 22 16:49:48 crc kubenswrapper[4704]: I0122 16:49:48.458144 4704 generic.go:334] "Generic (PLEG): container finished" podID="e6966404-cd7f-426d-ab2e-f7a6cf2c8959" containerID="51aba93cbc57783b7925f69e5e1b668a2a53d2b7e61ea22b550798c72b4c6bb5" exitCode=0 Jan 22 16:49:48 crc kubenswrapper[4704]: I0122 16:49:48.458197 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" event={"ID":"e6966404-cd7f-426d-ab2e-f7a6cf2c8959","Type":"ContainerDied","Data":"51aba93cbc57783b7925f69e5e1b668a2a53d2b7e61ea22b550798c72b4c6bb5"} Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 
16:49:49.086132 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.086452 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.841096 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.987391 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-credential-keys\") pod \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.987448 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-fernet-keys\") pod \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.987491 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-config-data\") pod \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 
16:49:49.987533 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29jx5\" (UniqueName: \"kubernetes.io/projected/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-kube-api-access-29jx5\") pod \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.987675 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-scripts\") pod \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.987708 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-combined-ca-bundle\") pod \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\" (UID: \"e6966404-cd7f-426d-ab2e-f7a6cf2c8959\") " Jan 22 16:49:49 crc kubenswrapper[4704]: I0122 16:49:49.992241 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e6966404-cd7f-426d-ab2e-f7a6cf2c8959" (UID: "e6966404-cd7f-426d-ab2e-f7a6cf2c8959"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.006386 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-scripts" (OuterVolumeSpecName: "scripts") pod "e6966404-cd7f-426d-ab2e-f7a6cf2c8959" (UID: "e6966404-cd7f-426d-ab2e-f7a6cf2c8959"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.006450 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e6966404-cd7f-426d-ab2e-f7a6cf2c8959" (UID: "e6966404-cd7f-426d-ab2e-f7a6cf2c8959"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.006688 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-kube-api-access-29jx5" (OuterVolumeSpecName: "kube-api-access-29jx5") pod "e6966404-cd7f-426d-ab2e-f7a6cf2c8959" (UID: "e6966404-cd7f-426d-ab2e-f7a6cf2c8959"). InnerVolumeSpecName "kube-api-access-29jx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.028012 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-config-data" (OuterVolumeSpecName: "config-data") pod "e6966404-cd7f-426d-ab2e-f7a6cf2c8959" (UID: "e6966404-cd7f-426d-ab2e-f7a6cf2c8959"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.032706 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6966404-cd7f-426d-ab2e-f7a6cf2c8959" (UID: "e6966404-cd7f-426d-ab2e-f7a6cf2c8959"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.088880 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.089085 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29jx5\" (UniqueName: \"kubernetes.io/projected/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-kube-api-access-29jx5\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.089156 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.089214 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.089272 4704 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.089332 4704 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6966404-cd7f-426d-ab2e-f7a6cf2c8959-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.484231 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerStarted","Data":"fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0"} Jan 22 16:49:50 crc kubenswrapper[4704]: 
I0122 16:49:50.488574 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" event={"ID":"e6966404-cd7f-426d-ab2e-f7a6cf2c8959","Type":"ContainerDied","Data":"1d1c2e2ab2f6e8928315bab52ba427bb15825cfd8d59b889d83d50daadabc8de"} Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.488607 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d1c2e2ab2f6e8928315bab52ba427bb15825cfd8d59b889d83d50daadabc8de" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.488641 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-jmxpt" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.694142 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-7747c9fb6-l9n4v"] Jan 22 16:49:50 crc kubenswrapper[4704]: E0122 16:49:50.694495 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6966404-cd7f-426d-ab2e-f7a6cf2c8959" containerName="keystone-bootstrap" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.694516 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6966404-cd7f-426d-ab2e-f7a6cf2c8959" containerName="keystone-bootstrap" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.694685 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6966404-cd7f-426d-ab2e-f7a6cf2c8959" containerName="keystone-bootstrap" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.695236 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698223 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-fernet-keys\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698339 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-combined-ca-bundle\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698385 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-internal-tls-certs\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698388 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698412 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g9bp\" (UniqueName: \"kubernetes.io/projected/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-kube-api-access-2g9bp\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698458 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-scripts\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698487 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-credential-keys\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698550 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-public-tls-certs\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.698586 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-config-data\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.699392 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.699586 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.699734 4704 reflector.go:368] Caches 
populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.700092 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-7ktr4" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.700278 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.720714 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7747c9fb6-l9n4v"] Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799234 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-scripts\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799277 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-credential-keys\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799322 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-public-tls-certs\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799347 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-config-data\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799380 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-fernet-keys\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799414 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-combined-ca-bundle\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799429 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-internal-tls-certs\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.799448 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g9bp\" (UniqueName: \"kubernetes.io/projected/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-kube-api-access-2g9bp\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.805174 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-credential-keys\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.805385 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-config-data\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.805935 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-internal-tls-certs\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.806594 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-fernet-keys\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.807818 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-scripts\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.811292 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-public-tls-certs\") pod 
\"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.811532 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-combined-ca-bundle\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:50 crc kubenswrapper[4704]: I0122 16:49:50.818509 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g9bp\" (UniqueName: \"kubernetes.io/projected/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-kube-api-access-2g9bp\") pod \"keystone-7747c9fb6-l9n4v\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:51 crc kubenswrapper[4704]: I0122 16:49:51.013087 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:51 crc kubenswrapper[4704]: I0122 16:49:51.495316 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7747c9fb6-l9n4v"] Jan 22 16:49:51 crc kubenswrapper[4704]: I0122 16:49:51.501855 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" event={"ID":"2c642fb5-a73d-47db-8dc4-dcb7c13c876d","Type":"ContainerStarted","Data":"877ba6166b298fb5a28ed4d7eaea6a0199af922492513723c3a6d864d4653709"} Jan 22 16:49:52 crc kubenswrapper[4704]: I0122 16:49:52.513416 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" event={"ID":"2c642fb5-a73d-47db-8dc4-dcb7c13c876d","Type":"ContainerStarted","Data":"057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde"} Jan 22 16:49:52 crc kubenswrapper[4704]: I0122 16:49:52.514751 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:49:52 crc kubenswrapper[4704]: I0122 16:49:52.549083 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" podStartSLOduration=2.549063779 podStartE2EDuration="2.549063779s" podCreationTimestamp="2026-01-22 16:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:49:52.543870462 +0000 UTC m=+1285.188417192" watchObservedRunningTime="2026-01-22 16:49:52.549063779 +0000 UTC m=+1285.193610489" Jan 22 16:50:00 crc kubenswrapper[4704]: I0122 16:50:00.575702 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerStarted","Data":"ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd"} Jan 22 16:50:00 crc 
kubenswrapper[4704]: I0122 16:50:00.578092 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:00 crc kubenswrapper[4704]: I0122 16:50:00.576500 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="proxy-httpd" containerID="cri-o://ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd" gracePeriod=30 Jan 22 16:50:00 crc kubenswrapper[4704]: I0122 16:50:00.576517 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="sg-core" containerID="cri-o://fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0" gracePeriod=30 Jan 22 16:50:00 crc kubenswrapper[4704]: I0122 16:50:00.576532 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-notification-agent" containerID="cri-o://ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e" gracePeriod=30 Jan 22 16:50:00 crc kubenswrapper[4704]: I0122 16:50:00.576088 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-central-agent" containerID="cri-o://034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c" gracePeriod=30 Jan 22 16:50:00 crc kubenswrapper[4704]: I0122 16:50:00.608736 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.244624724 podStartE2EDuration="26.608715199s" podCreationTimestamp="2026-01-22 16:49:34 +0000 UTC" firstStartedPulling="2026-01-22 16:49:35.820910157 +0000 UTC m=+1268.465456857" 
lastFinishedPulling="2026-01-22 16:50:00.185000612 +0000 UTC m=+1292.829547332" observedRunningTime="2026-01-22 16:50:00.600865686 +0000 UTC m=+1293.245412406" watchObservedRunningTime="2026-01-22 16:50:00.608715199 +0000 UTC m=+1293.253261899" Jan 22 16:50:01 crc kubenswrapper[4704]: I0122 16:50:01.589328 4704 generic.go:334] "Generic (PLEG): container finished" podID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerID="ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd" exitCode=0 Jan 22 16:50:01 crc kubenswrapper[4704]: I0122 16:50:01.589357 4704 generic.go:334] "Generic (PLEG): container finished" podID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerID="fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0" exitCode=2 Jan 22 16:50:01 crc kubenswrapper[4704]: I0122 16:50:01.589365 4704 generic.go:334] "Generic (PLEG): container finished" podID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerID="034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c" exitCode=0 Jan 22 16:50:01 crc kubenswrapper[4704]: I0122 16:50:01.590081 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerDied","Data":"ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd"} Jan 22 16:50:01 crc kubenswrapper[4704]: I0122 16:50:01.590122 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerDied","Data":"fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0"} Jan 22 16:50:01 crc kubenswrapper[4704]: I0122 16:50:01.590132 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerDied","Data":"034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c"} Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.225340 
4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.316553 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-log-httpd\") pod \"5f252b90-0bee-45f5-b28d-9cb41b6de684\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.316868 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-combined-ca-bundle\") pod \"5f252b90-0bee-45f5-b28d-9cb41b6de684\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.316904 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-config-data\") pod \"5f252b90-0bee-45f5-b28d-9cb41b6de684\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.316935 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4cpl\" (UniqueName: \"kubernetes.io/projected/5f252b90-0bee-45f5-b28d-9cb41b6de684-kube-api-access-r4cpl\") pod \"5f252b90-0bee-45f5-b28d-9cb41b6de684\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.317011 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-sg-core-conf-yaml\") pod \"5f252b90-0bee-45f5-b28d-9cb41b6de684\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.317057 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-scripts\") pod \"5f252b90-0bee-45f5-b28d-9cb41b6de684\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.317145 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-run-httpd\") pod \"5f252b90-0bee-45f5-b28d-9cb41b6de684\" (UID: \"5f252b90-0bee-45f5-b28d-9cb41b6de684\") " Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.317155 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5f252b90-0bee-45f5-b28d-9cb41b6de684" (UID: "5f252b90-0bee-45f5-b28d-9cb41b6de684"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.317706 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5f252b90-0bee-45f5-b28d-9cb41b6de684" (UID: "5f252b90-0bee-45f5-b28d-9cb41b6de684"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.318179 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.318231 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f252b90-0bee-45f5-b28d-9cb41b6de684-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.322204 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f252b90-0bee-45f5-b28d-9cb41b6de684-kube-api-access-r4cpl" (OuterVolumeSpecName: "kube-api-access-r4cpl") pod "5f252b90-0bee-45f5-b28d-9cb41b6de684" (UID: "5f252b90-0bee-45f5-b28d-9cb41b6de684"). InnerVolumeSpecName "kube-api-access-r4cpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.322452 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-scripts" (OuterVolumeSpecName: "scripts") pod "5f252b90-0bee-45f5-b28d-9cb41b6de684" (UID: "5f252b90-0bee-45f5-b28d-9cb41b6de684"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.338165 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5f252b90-0bee-45f5-b28d-9cb41b6de684" (UID: "5f252b90-0bee-45f5-b28d-9cb41b6de684"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.371425 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f252b90-0bee-45f5-b28d-9cb41b6de684" (UID: "5f252b90-0bee-45f5-b28d-9cb41b6de684"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.387832 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-config-data" (OuterVolumeSpecName: "config-data") pod "5f252b90-0bee-45f5-b28d-9cb41b6de684" (UID: "5f252b90-0bee-45f5-b28d-9cb41b6de684"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.419987 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.420034 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.420044 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4cpl\" (UniqueName: \"kubernetes.io/projected/5f252b90-0bee-45f5-b28d-9cb41b6de684-kube-api-access-r4cpl\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.420055 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.420063 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f252b90-0bee-45f5-b28d-9cb41b6de684-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.599948 4704 generic.go:334] "Generic (PLEG): container finished" podID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerID="ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e" exitCode=0 Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.599991 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerDied","Data":"ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e"} Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.600020 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f252b90-0bee-45f5-b28d-9cb41b6de684","Type":"ContainerDied","Data":"92ef9724b83d3ca646ceadf0bb1de238dd3777c5131fb65b09191d83126edc4b"} Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.600023 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.600039 4704 scope.go:117] "RemoveContainer" containerID="ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.620534 4704 scope.go:117] "RemoveContainer" containerID="fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.640695 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.641127 4704 scope.go:117] "RemoveContainer" containerID="ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.649161 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.668691 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.669208 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-central-agent" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669217 4704 scope.go:117] "RemoveContainer" containerID="034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669230 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-central-agent" Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.669316 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="sg-core" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669323 4704 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="sg-core" Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.669333 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="proxy-httpd" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669339 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="proxy-httpd" Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.669349 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-notification-agent" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669354 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-notification-agent" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669532 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="proxy-httpd" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669546 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="sg-core" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669557 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-notification-agent" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.669576 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" containerName="ceilometer-central-agent" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.671227 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.673710 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.674067 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.678720 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.685185 4704 scope.go:117] "RemoveContainer" containerID="ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd" Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.685609 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd\": container with ID starting with ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd not found: ID does not exist" containerID="ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.685707 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd"} err="failed to get container status \"ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd\": rpc error: code = NotFound desc = could not find container \"ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd\": container with ID starting with ffd068cb51affffa998e2561411e4b6105de9998291b05bd83cbed43f53283dd not found: ID does not exist" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.685811 4704 scope.go:117] "RemoveContainer" containerID="fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0" Jan 
22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.686177 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0\": container with ID starting with fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0 not found: ID does not exist" containerID="fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.686232 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0"} err="failed to get container status \"fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0\": rpc error: code = NotFound desc = could not find container \"fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0\": container with ID starting with fd5bab2320be88bc562b7afdaa571245dfa36662897181c6d89ac303adba74e0 not found: ID does not exist" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.686252 4704 scope.go:117] "RemoveContainer" containerID="ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e" Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.686476 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e\": container with ID starting with ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e not found: ID does not exist" containerID="ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.686556 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e"} err="failed to get container status 
\"ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e\": rpc error: code = NotFound desc = could not find container \"ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e\": container with ID starting with ec9e86558aecf1441c3da7359fda4e8809f1e8a5bdf0bb6ccd914eff68065c6e not found: ID does not exist" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.686643 4704 scope.go:117] "RemoveContainer" containerID="034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c" Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.687005 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c\": container with ID starting with 034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c not found: ID does not exist" containerID="034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.687027 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c"} err="failed to get container status \"034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c\": rpc error: code = NotFound desc = could not find container \"034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c\": container with ID starting with 034a6974887c41d794ecf209b0bb193ba584943ab46e5f7bad9503e5b0edea0c not found: ID does not exist" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.723715 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-log-httpd\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 
16:50:02.723957 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mmx\" (UniqueName: \"kubernetes.io/projected/9e0bbb10-e314-4591-a64e-4e6a56abfef0-kube-api-access-98mmx\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.724062 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.724255 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.724312 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-run-httpd\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.724460 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-scripts\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.724571 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-config-data\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.744370 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:02 crc kubenswrapper[4704]: E0122 16:50:02.744919 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-98mmx log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/ceilometer-0" podUID="9e0bbb10-e314-4591-a64e-4e6a56abfef0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.826393 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98mmx\" (UniqueName: \"kubernetes.io/projected/9e0bbb10-e314-4591-a64e-4e6a56abfef0-kube-api-access-98mmx\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.826464 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.826498 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc 
kubenswrapper[4704]: I0122 16:50:02.826517 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-run-httpd\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.826554 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-scripts\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.826584 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-config-data\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.826616 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-log-httpd\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.827196 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-log-httpd\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.827314 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-run-httpd\") 
pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.830388 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.831202 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-scripts\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.831968 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-config-data\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.839365 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:02 crc kubenswrapper[4704]: I0122 16:50:02.861284 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mmx\" (UniqueName: \"kubernetes.io/projected/9e0bbb10-e314-4591-a64e-4e6a56abfef0-kube-api-access-98mmx\") pod \"ceilometer-0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.607262 4704 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.618165 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637129 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-config-data\") pod \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637183 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-combined-ca-bundle\") pod \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637207 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-log-httpd\") pod \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637300 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98mmx\" (UniqueName: \"kubernetes.io/projected/9e0bbb10-e314-4591-a64e-4e6a56abfef0-kube-api-access-98mmx\") pod \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637350 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-sg-core-conf-yaml\") pod \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\" 
(UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637401 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-scripts\") pod \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637439 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-run-httpd\") pod \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\" (UID: \"9e0bbb10-e314-4591-a64e-4e6a56abfef0\") " Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637707 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e0bbb10-e314-4591-a64e-4e6a56abfef0" (UID: "9e0bbb10-e314-4591-a64e-4e6a56abfef0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.637958 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e0bbb10-e314-4591-a64e-4e6a56abfef0" (UID: "9e0bbb10-e314-4591-a64e-4e6a56abfef0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.641280 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-config-data" (OuterVolumeSpecName: "config-data") pod "9e0bbb10-e314-4591-a64e-4e6a56abfef0" (UID: "9e0bbb10-e314-4591-a64e-4e6a56abfef0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.641559 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e0bbb10-e314-4591-a64e-4e6a56abfef0" (UID: "9e0bbb10-e314-4591-a64e-4e6a56abfef0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.642642 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e0bbb10-e314-4591-a64e-4e6a56abfef0" (UID: "9e0bbb10-e314-4591-a64e-4e6a56abfef0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.645575 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f252b90-0bee-45f5-b28d-9cb41b6de684" path="/var/lib/kubelet/pods/5f252b90-0bee-45f5-b28d-9cb41b6de684/volumes" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.646972 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0bbb10-e314-4591-a64e-4e6a56abfef0-kube-api-access-98mmx" (OuterVolumeSpecName: "kube-api-access-98mmx") pod "9e0bbb10-e314-4591-a64e-4e6a56abfef0" (UID: "9e0bbb10-e314-4591-a64e-4e6a56abfef0"). InnerVolumeSpecName "kube-api-access-98mmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.647279 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-scripts" (OuterVolumeSpecName: "scripts") pod "9e0bbb10-e314-4591-a64e-4e6a56abfef0" (UID: "9e0bbb10-e314-4591-a64e-4e6a56abfef0"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.738920 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.738954 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.738964 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.738974 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e0bbb10-e314-4591-a64e-4e6a56abfef0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.738983 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98mmx\" (UniqueName: \"kubernetes.io/projected/9e0bbb10-e314-4591-a64e-4e6a56abfef0-kube-api-access-98mmx\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.738992 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:03 crc kubenswrapper[4704]: I0122 16:50:03.739074 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0bbb10-e314-4591-a64e-4e6a56abfef0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.613655 
4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.666269 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.679265 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.696619 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.699974 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.702065 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.702308 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.712268 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.752622 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-scripts\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.752935 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-log-httpd\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.752964 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.753001 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-config-data\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.753024 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22sb6\" (UniqueName: \"kubernetes.io/projected/8742d864-1ca8-492c-aa46-17ea42cf343d-kube-api-access-22sb6\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.753073 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-run-httpd\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.753103 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 
crc kubenswrapper[4704]: I0122 16:50:04.854038 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-log-httpd\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.854080 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.854129 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-config-data\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.854164 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22sb6\" (UniqueName: \"kubernetes.io/projected/8742d864-1ca8-492c-aa46-17ea42cf343d-kube-api-access-22sb6\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.854210 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-run-httpd\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.854244 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.854305 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-scripts\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.855022 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-log-httpd\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.855088 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-run-httpd\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.859752 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.859910 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-config-data\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc 
kubenswrapper[4704]: I0122 16:50:04.867082 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.868396 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-scripts\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:04 crc kubenswrapper[4704]: I0122 16:50:04.873058 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22sb6\" (UniqueName: \"kubernetes.io/projected/8742d864-1ca8-492c-aa46-17ea42cf343d-kube-api-access-22sb6\") pod \"ceilometer-0\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:05 crc kubenswrapper[4704]: I0122 16:50:05.028365 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:05 crc kubenswrapper[4704]: I0122 16:50:05.461409 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:05 crc kubenswrapper[4704]: I0122 16:50:05.626919 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerStarted","Data":"b187c6170d7130f075c2307d10a4f1ae78d6998f19bce3c885e787921ce7f990"} Jan 22 16:50:05 crc kubenswrapper[4704]: I0122 16:50:05.643805 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0bbb10-e314-4591-a64e-4e6a56abfef0" path="/var/lib/kubelet/pods/9e0bbb10-e314-4591-a64e-4e6a56abfef0/volumes" Jan 22 16:50:06 crc kubenswrapper[4704]: I0122 16:50:06.682387 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerStarted","Data":"e21988482cfe5a530f2807198b32fb8c66343b73637ae620f62e5a56f696c761"} Jan 22 16:50:07 crc kubenswrapper[4704]: I0122 16:50:07.692460 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerStarted","Data":"1ab07f15f9f7d79cc20cd3b5437622859b9b405bd61d23b64a8f3d88a122e511"} Jan 22 16:50:07 crc kubenswrapper[4704]: I0122 16:50:07.693005 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerStarted","Data":"037a12f83a4703e5c846db3a1499ea00f45c0d0f52cd8b56481bb0203afb39ae"} Jan 22 16:50:08 crc kubenswrapper[4704]: I0122 16:50:08.700864 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerStarted","Data":"484a206d5fefd298d2625280c2d7b1944e23f61d3a534fb26fb554281d74fb85"} Jan 22 16:50:08 crc kubenswrapper[4704]: I0122 16:50:08.701169 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:08 crc kubenswrapper[4704]: I0122 16:50:08.722130 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.826405606 podStartE2EDuration="4.722109211s" podCreationTimestamp="2026-01-22 16:50:04 +0000 UTC" firstStartedPulling="2026-01-22 16:50:05.467069686 +0000 UTC m=+1298.111616386" lastFinishedPulling="2026-01-22 16:50:08.362773291 +0000 UTC m=+1301.007319991" observedRunningTime="2026-01-22 16:50:08.717651102 +0000 UTC m=+1301.362197802" watchObservedRunningTime="2026-01-22 16:50:08.722109211 +0000 UTC m=+1301.366655911" Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.086573 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.087187 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.087244 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.088089 4704 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33c05c7b04e52a99d7618873c0e8cfbae6126223bfd8e14eabf1b1f805e4a907"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.088182 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://33c05c7b04e52a99d7618873c0e8cfbae6126223bfd8e14eabf1b1f805e4a907" gracePeriod=600 Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.788988 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="33c05c7b04e52a99d7618873c0e8cfbae6126223bfd8e14eabf1b1f805e4a907" exitCode=0 Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.789399 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"33c05c7b04e52a99d7618873c0e8cfbae6126223bfd8e14eabf1b1f805e4a907"} Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.789425 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435"} Jan 22 16:50:19 crc kubenswrapper[4704]: I0122 16:50:19.789441 4704 scope.go:117] "RemoveContainer" containerID="88cf191bb3e64eb833ed16834e1430c8c271d9cb96c329f4eba42d0922f7467f" Jan 22 16:50:22 crc kubenswrapper[4704]: I0122 16:50:22.685552 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.293608 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.295225 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.297172 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.301693 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-cn2z7" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.303276 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.304769 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.378651 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9df6412a-01ed-4d5c-826e-956eb7aca29e-openstack-config-secret\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.378691 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwv6q\" (UniqueName: \"kubernetes.io/projected/9df6412a-01ed-4d5c-826e-956eb7aca29e-kube-api-access-mwv6q\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.378717 
4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9df6412a-01ed-4d5c-826e-956eb7aca29e-openstack-config\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.379225 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df6412a-01ed-4d5c-826e-956eb7aca29e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.481087 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df6412a-01ed-4d5c-826e-956eb7aca29e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.481169 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9df6412a-01ed-4d5c-826e-956eb7aca29e-openstack-config-secret\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.481200 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwv6q\" (UniqueName: \"kubernetes.io/projected/9df6412a-01ed-4d5c-826e-956eb7aca29e-kube-api-access-mwv6q\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.481221 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9df6412a-01ed-4d5c-826e-956eb7aca29e-openstack-config\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.482727 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9df6412a-01ed-4d5c-826e-956eb7aca29e-openstack-config\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.489313 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9df6412a-01ed-4d5c-826e-956eb7aca29e-openstack-config-secret\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.489453 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df6412a-01ed-4d5c-826e-956eb7aca29e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.497629 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwv6q\" (UniqueName: \"kubernetes.io/projected/9df6412a-01ed-4d5c-826e-956eb7aca29e-kube-api-access-mwv6q\") pod \"openstackclient\" (UID: \"9df6412a-01ed-4d5c-826e-956eb7aca29e\") " pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:24 crc kubenswrapper[4704]: I0122 16:50:24.611317 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 16:50:25 crc kubenswrapper[4704]: I0122 16:50:25.052899 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 16:50:25 crc kubenswrapper[4704]: I0122 16:50:25.851367 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"9df6412a-01ed-4d5c-826e-956eb7aca29e","Type":"ContainerStarted","Data":"8799899d75b7972b4c0b5498e5c34deb42069b9fb918d86217dc14762b0e9743"} Jan 22 16:50:35 crc kubenswrapper[4704]: I0122 16:50:35.034428 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:35 crc kubenswrapper[4704]: I0122 16:50:35.944853 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"9df6412a-01ed-4d5c-826e-956eb7aca29e","Type":"ContainerStarted","Data":"99a66c6f6b4bd8622b04e51ea8125d09d02e6e4bcf369dc02be928bad1d1e99a"} Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.497836 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.888476738 podStartE2EDuration="13.497787642s" podCreationTimestamp="2026-01-22 16:50:24 +0000 UTC" firstStartedPulling="2026-01-22 16:50:25.055159744 +0000 UTC m=+1317.699706444" lastFinishedPulling="2026-01-22 16:50:35.664470648 +0000 UTC m=+1328.309017348" observedRunningTime="2026-01-22 16:50:35.967330293 +0000 UTC m=+1328.611876993" watchObservedRunningTime="2026-01-22 16:50:37.497787642 +0000 UTC m=+1330.142334352" Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.504855 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.505116 4704 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/kube-state-metrics-0" podUID="29d5ab67-1ca3-482c-987e-1f299f728372" containerName="kube-state-metrics" containerID="cri-o://99fb9373addcecd0349506a59cd1d6e42e4816c33e45b8128d6e638b9cc2613f" gracePeriod=30 Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.975343 4704 generic.go:334] "Generic (PLEG): container finished" podID="29d5ab67-1ca3-482c-987e-1f299f728372" containerID="99fb9373addcecd0349506a59cd1d6e42e4816c33e45b8128d6e638b9cc2613f" exitCode=2 Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.975430 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"29d5ab67-1ca3-482c-987e-1f299f728372","Type":"ContainerDied","Data":"99fb9373addcecd0349506a59cd1d6e42e4816c33e45b8128d6e638b9cc2613f"} Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.975597 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"29d5ab67-1ca3-482c-987e-1f299f728372","Type":"ContainerDied","Data":"c94ee0dbaf41470dda083a3ba8be2c23828f111d9da012155cd0e87f33b89f51"} Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.975611 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94ee0dbaf41470dda083a3ba8be2c23828f111d9da012155cd0e87f33b89f51" Jan 22 16:50:37 crc kubenswrapper[4704]: I0122 16:50:37.983507 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.107229 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2qcz\" (UniqueName: \"kubernetes.io/projected/29d5ab67-1ca3-482c-987e-1f299f728372-kube-api-access-s2qcz\") pod \"29d5ab67-1ca3-482c-987e-1f299f728372\" (UID: \"29d5ab67-1ca3-482c-987e-1f299f728372\") " Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.121988 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d5ab67-1ca3-482c-987e-1f299f728372-kube-api-access-s2qcz" (OuterVolumeSpecName: "kube-api-access-s2qcz") pod "29d5ab67-1ca3-482c-987e-1f299f728372" (UID: "29d5ab67-1ca3-482c-987e-1f299f728372"). InnerVolumeSpecName "kube-api-access-s2qcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.208621 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2qcz\" (UniqueName: \"kubernetes.io/projected/29d5ab67-1ca3-482c-987e-1f299f728372-kube-api-access-s2qcz\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.557659 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.559253 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-central-agent" containerID="cri-o://e21988482cfe5a530f2807198b32fb8c66343b73637ae620f62e5a56f696c761" gracePeriod=30 Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.559305 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-notification-agent" 
containerID="cri-o://037a12f83a4703e5c846db3a1499ea00f45c0d0f52cd8b56481bb0203afb39ae" gracePeriod=30 Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.559305 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="sg-core" containerID="cri-o://1ab07f15f9f7d79cc20cd3b5437622859b9b405bd61d23b64a8f3d88a122e511" gracePeriod=30 Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.559292 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="proxy-httpd" containerID="cri-o://484a206d5fefd298d2625280c2d7b1944e23f61d3a534fb26fb554281d74fb85" gracePeriod=30 Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.986802 4704 generic.go:334] "Generic (PLEG): container finished" podID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerID="484a206d5fefd298d2625280c2d7b1944e23f61d3a534fb26fb554281d74fb85" exitCode=0 Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.987051 4704 generic.go:334] "Generic (PLEG): container finished" podID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerID="1ab07f15f9f7d79cc20cd3b5437622859b9b405bd61d23b64a8f3d88a122e511" exitCode=2 Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.987150 4704 generic.go:334] "Generic (PLEG): container finished" podID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerID="e21988482cfe5a530f2807198b32fb8c66343b73637ae620f62e5a56f696c761" exitCode=0 Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.986887 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerDied","Data":"484a206d5fefd298d2625280c2d7b1944e23f61d3a534fb26fb554281d74fb85"} Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.987360 4704 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerDied","Data":"1ab07f15f9f7d79cc20cd3b5437622859b9b405bd61d23b64a8f3d88a122e511"} Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.987377 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerDied","Data":"e21988482cfe5a530f2807198b32fb8c66343b73637ae620f62e5a56f696c761"} Jan 22 16:50:38 crc kubenswrapper[4704]: I0122 16:50:38.987505 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.014964 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.022305 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.035180 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:50:39 crc kubenswrapper[4704]: E0122 16:50:39.035757 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d5ab67-1ca3-482c-987e-1f299f728372" containerName="kube-state-metrics" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.035899 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d5ab67-1ca3-482c-987e-1f299f728372" containerName="kube-state-metrics" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.036187 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d5ab67-1ca3-482c-987e-1f299f728372" containerName="kube-state-metrics" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.037021 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.040664 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.041140 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.049695 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.119449 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.119521 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.119552 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.119600 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2qbw\" (UniqueName: \"kubernetes.io/projected/d513054b-70e9-4e87-99ab-934736abc0bc-kube-api-access-v2qbw\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.220741 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.221020 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.221163 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.221303 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2qbw\" (UniqueName: \"kubernetes.io/projected/d513054b-70e9-4e87-99ab-934736abc0bc-kube-api-access-v2qbw\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.226571 
4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.226579 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.226705 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d513054b-70e9-4e87-99ab-934736abc0bc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.255599 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2qbw\" (UniqueName: \"kubernetes.io/projected/d513054b-70e9-4e87-99ab-934736abc0bc-kube-api-access-v2qbw\") pod \"kube-state-metrics-0\" (UID: \"d513054b-70e9-4e87-99ab-934736abc0bc\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:39 crc kubenswrapper[4704]: I0122 16:50:39.357644 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:39.664702 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d5ab67-1ca3-482c-987e-1f299f728372" path="/var/lib/kubelet/pods/29d5ab67-1ca3-482c-987e-1f299f728372/volumes" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:39.672394 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 16:50:40 crc kubenswrapper[4704]: W0122 16:50:39.675848 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd513054b_70e9_4e87_99ab_934736abc0bc.slice/crio-0654961a33358f3bca14d27f3a7ddd2dad8759ab489ac171557eb26f9ff4f97b WatchSource:0}: Error finding container 0654961a33358f3bca14d27f3a7ddd2dad8759ab489ac171557eb26f9ff4f97b: Status 404 returned error can't find the container with id 0654961a33358f3bca14d27f3a7ddd2dad8759ab489ac171557eb26f9ff4f97b Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:39.996621 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"d513054b-70e9-4e87-99ab-934736abc0bc","Type":"ContainerStarted","Data":"0654961a33358f3bca14d27f3a7ddd2dad8759ab489ac171557eb26f9ff4f97b"} Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.000285 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-m6nnt"] Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.001594 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.012755 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-m6nnt"] Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.113776 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf"] Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.115317 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.119073 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.122727 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf"] Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.143876 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d25c2724-3fdb-4b6a-b468-5b8aca733e08-operator-scripts\") pod \"watcher-4c46-account-create-update-6vrgf\" (UID: \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.143942 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec44ad67-44af-4b98-b389-0d997a87d8e7-operator-scripts\") pod \"watcher-db-create-m6nnt\" (UID: \"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.144011 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/d25c2724-3fdb-4b6a-b468-5b8aca733e08-kube-api-access-7xrf7\") pod \"watcher-4c46-account-create-update-6vrgf\" (UID: \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.144051 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzfgp\" (UniqueName: \"kubernetes.io/projected/ec44ad67-44af-4b98-b389-0d997a87d8e7-kube-api-access-dzfgp\") pod \"watcher-db-create-m6nnt\" (UID: \"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.245384 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/d25c2724-3fdb-4b6a-b468-5b8aca733e08-kube-api-access-7xrf7\") pod \"watcher-4c46-account-create-update-6vrgf\" (UID: \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.245439 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzfgp\" (UniqueName: \"kubernetes.io/projected/ec44ad67-44af-4b98-b389-0d997a87d8e7-kube-api-access-dzfgp\") pod \"watcher-db-create-m6nnt\" (UID: \"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.245882 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d25c2724-3fdb-4b6a-b468-5b8aca733e08-operator-scripts\") pod \"watcher-4c46-account-create-update-6vrgf\" (UID: \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " 
pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.245970 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec44ad67-44af-4b98-b389-0d997a87d8e7-operator-scripts\") pod \"watcher-db-create-m6nnt\" (UID: \"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.246822 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d25c2724-3fdb-4b6a-b468-5b8aca733e08-operator-scripts\") pod \"watcher-4c46-account-create-update-6vrgf\" (UID: \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.246892 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec44ad67-44af-4b98-b389-0d997a87d8e7-operator-scripts\") pod \"watcher-db-create-m6nnt\" (UID: \"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.262411 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzfgp\" (UniqueName: \"kubernetes.io/projected/ec44ad67-44af-4b98-b389-0d997a87d8e7-kube-api-access-dzfgp\") pod \"watcher-db-create-m6nnt\" (UID: \"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.263680 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/d25c2724-3fdb-4b6a-b468-5b8aca733e08-kube-api-access-7xrf7\") pod \"watcher-4c46-account-create-update-6vrgf\" (UID: 
\"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.444152 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.454602 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:40 crc kubenswrapper[4704]: I0122 16:50:40.954968 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-m6nnt"] Jan 22 16:50:41 crc kubenswrapper[4704]: I0122 16:50:41.006946 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-m6nnt" event={"ID":"ec44ad67-44af-4b98-b389-0d997a87d8e7","Type":"ContainerStarted","Data":"64429a4371889bbf6965fee3715c7db67f1fd178a54148bf8c6252c84f6a0061"} Jan 22 16:50:41 crc kubenswrapper[4704]: I0122 16:50:41.010265 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"d513054b-70e9-4e87-99ab-934736abc0bc","Type":"ContainerStarted","Data":"4f38a1cf03c5b3bfa5491568813e05bd396f21a2034de1f7288654892acf8949"} Jan 22 16:50:41 crc kubenswrapper[4704]: I0122 16:50:41.011014 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:50:41 crc kubenswrapper[4704]: I0122 16:50:41.020816 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf"] Jan 22 16:50:41 crc kubenswrapper[4704]: I0122 16:50:41.039301 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=1.6547338329999999 podStartE2EDuration="2.039282169s" podCreationTimestamp="2026-01-22 16:50:39 +0000 
UTC" firstStartedPulling="2026-01-22 16:50:39.67737602 +0000 UTC m=+1332.321922720" lastFinishedPulling="2026-01-22 16:50:40.061924356 +0000 UTC m=+1332.706471056" observedRunningTime="2026-01-22 16:50:41.029871808 +0000 UTC m=+1333.674418528" watchObservedRunningTime="2026-01-22 16:50:41.039282169 +0000 UTC m=+1333.683828869" Jan 22 16:50:42 crc kubenswrapper[4704]: I0122 16:50:42.019426 4704 generic.go:334] "Generic (PLEG): container finished" podID="ec44ad67-44af-4b98-b389-0d997a87d8e7" containerID="3160f10dca44b170509667434366190a74b0d800b9a5a17c26f195e0a3e8ab47" exitCode=0 Jan 22 16:50:42 crc kubenswrapper[4704]: I0122 16:50:42.019631 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-m6nnt" event={"ID":"ec44ad67-44af-4b98-b389-0d997a87d8e7","Type":"ContainerDied","Data":"3160f10dca44b170509667434366190a74b0d800b9a5a17c26f195e0a3e8ab47"} Jan 22 16:50:42 crc kubenswrapper[4704]: I0122 16:50:42.022015 4704 generic.go:334] "Generic (PLEG): container finished" podID="d25c2724-3fdb-4b6a-b468-5b8aca733e08" containerID="84c66062483c0742c9b1302ab7ba8990c0e8ba55e393f1e6dbc1cd3556677351" exitCode=0 Jan 22 16:50:42 crc kubenswrapper[4704]: I0122 16:50:42.022105 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" event={"ID":"d25c2724-3fdb-4b6a-b468-5b8aca733e08","Type":"ContainerDied","Data":"84c66062483c0742c9b1302ab7ba8990c0e8ba55e393f1e6dbc1cd3556677351"} Jan 22 16:50:42 crc kubenswrapper[4704]: I0122 16:50:42.022167 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" event={"ID":"d25c2724-3fdb-4b6a-b468-5b8aca733e08","Type":"ContainerStarted","Data":"fd28c5a0f9c82173b6fdc6cd66b22885e484a43b3ab256f22bc606e063508375"} Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.034878 4704 generic.go:334] "Generic (PLEG): container finished" 
podID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerID="037a12f83a4703e5c846db3a1499ea00f45c0d0f52cd8b56481bb0203afb39ae" exitCode=0 Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.034829 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerDied","Data":"037a12f83a4703e5c846db3a1499ea00f45c0d0f52cd8b56481bb0203afb39ae"} Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.103026 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.299496 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22sb6\" (UniqueName: \"kubernetes.io/projected/8742d864-1ca8-492c-aa46-17ea42cf343d-kube-api-access-22sb6\") pod \"8742d864-1ca8-492c-aa46-17ea42cf343d\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.299835 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-combined-ca-bundle\") pod \"8742d864-1ca8-492c-aa46-17ea42cf343d\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.300000 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-sg-core-conf-yaml\") pod \"8742d864-1ca8-492c-aa46-17ea42cf343d\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.300044 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-log-httpd\") pod 
\"8742d864-1ca8-492c-aa46-17ea42cf343d\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.300098 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-scripts\") pod \"8742d864-1ca8-492c-aa46-17ea42cf343d\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.300138 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-run-httpd\") pod \"8742d864-1ca8-492c-aa46-17ea42cf343d\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.300167 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-config-data\") pod \"8742d864-1ca8-492c-aa46-17ea42cf343d\" (UID: \"8742d864-1ca8-492c-aa46-17ea42cf343d\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.301450 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8742d864-1ca8-492c-aa46-17ea42cf343d" (UID: "8742d864-1ca8-492c-aa46-17ea42cf343d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.302874 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8742d864-1ca8-492c-aa46-17ea42cf343d" (UID: "8742d864-1ca8-492c-aa46-17ea42cf343d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.305523 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-scripts" (OuterVolumeSpecName: "scripts") pod "8742d864-1ca8-492c-aa46-17ea42cf343d" (UID: "8742d864-1ca8-492c-aa46-17ea42cf343d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.305710 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8742d864-1ca8-492c-aa46-17ea42cf343d-kube-api-access-22sb6" (OuterVolumeSpecName: "kube-api-access-22sb6") pod "8742d864-1ca8-492c-aa46-17ea42cf343d" (UID: "8742d864-1ca8-492c-aa46-17ea42cf343d"). InnerVolumeSpecName "kube-api-access-22sb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.333981 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8742d864-1ca8-492c-aa46-17ea42cf343d" (UID: "8742d864-1ca8-492c-aa46-17ea42cf343d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.367249 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8742d864-1ca8-492c-aa46-17ea42cf343d" (UID: "8742d864-1ca8-492c-aa46-17ea42cf343d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.369943 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.370726 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.388145 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-config-data" (OuterVolumeSpecName: "config-data") pod "8742d864-1ca8-492c-aa46-17ea42cf343d" (UID: "8742d864-1ca8-492c-aa46-17ea42cf343d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.402117 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.402145 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.402156 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.402165 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8742d864-1ca8-492c-aa46-17ea42cf343d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.402173 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.402181 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22sb6\" (UniqueName: \"kubernetes.io/projected/8742d864-1ca8-492c-aa46-17ea42cf343d-kube-api-access-22sb6\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.402191 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8742d864-1ca8-492c-aa46-17ea42cf343d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.503456 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/d25c2724-3fdb-4b6a-b468-5b8aca733e08-kube-api-access-7xrf7\") pod \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\" (UID: \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.503570 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d25c2724-3fdb-4b6a-b468-5b8aca733e08-operator-scripts\") pod \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\" (UID: \"d25c2724-3fdb-4b6a-b468-5b8aca733e08\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.503619 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec44ad67-44af-4b98-b389-0d997a87d8e7-operator-scripts\") pod \"ec44ad67-44af-4b98-b389-0d997a87d8e7\" (UID: \"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.503727 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzfgp\" (UniqueName: \"kubernetes.io/projected/ec44ad67-44af-4b98-b389-0d997a87d8e7-kube-api-access-dzfgp\") pod \"ec44ad67-44af-4b98-b389-0d997a87d8e7\" (UID: 
\"ec44ad67-44af-4b98-b389-0d997a87d8e7\") " Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.504196 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d25c2724-3fdb-4b6a-b468-5b8aca733e08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d25c2724-3fdb-4b6a-b468-5b8aca733e08" (UID: "d25c2724-3fdb-4b6a-b468-5b8aca733e08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.504337 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec44ad67-44af-4b98-b389-0d997a87d8e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec44ad67-44af-4b98-b389-0d997a87d8e7" (UID: "ec44ad67-44af-4b98-b389-0d997a87d8e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.506928 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25c2724-3fdb-4b6a-b468-5b8aca733e08-kube-api-access-7xrf7" (OuterVolumeSpecName: "kube-api-access-7xrf7") pod "d25c2724-3fdb-4b6a-b468-5b8aca733e08" (UID: "d25c2724-3fdb-4b6a-b468-5b8aca733e08"). InnerVolumeSpecName "kube-api-access-7xrf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.507182 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec44ad67-44af-4b98-b389-0d997a87d8e7-kube-api-access-dzfgp" (OuterVolumeSpecName: "kube-api-access-dzfgp") pod "ec44ad67-44af-4b98-b389-0d997a87d8e7" (UID: "ec44ad67-44af-4b98-b389-0d997a87d8e7"). InnerVolumeSpecName "kube-api-access-dzfgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.606379 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzfgp\" (UniqueName: \"kubernetes.io/projected/ec44ad67-44af-4b98-b389-0d997a87d8e7-kube-api-access-dzfgp\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.606440 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xrf7\" (UniqueName: \"kubernetes.io/projected/d25c2724-3fdb-4b6a-b468-5b8aca733e08-kube-api-access-7xrf7\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.606460 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d25c2724-3fdb-4b6a-b468-5b8aca733e08-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:43 crc kubenswrapper[4704]: I0122 16:50:43.606476 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec44ad67-44af-4b98-b389-0d997a87d8e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.042958 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" event={"ID":"d25c2724-3fdb-4b6a-b468-5b8aca733e08","Type":"ContainerDied","Data":"fd28c5a0f9c82173b6fdc6cd66b22885e484a43b3ab256f22bc606e063508375"} Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.042969 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.043004 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd28c5a0f9c82173b6fdc6cd66b22885e484a43b3ab256f22bc606e063508375" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.044625 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-m6nnt" event={"ID":"ec44ad67-44af-4b98-b389-0d997a87d8e7","Type":"ContainerDied","Data":"64429a4371889bbf6965fee3715c7db67f1fd178a54148bf8c6252c84f6a0061"} Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.044654 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64429a4371889bbf6965fee3715c7db67f1fd178a54148bf8c6252c84f6a0061" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.044694 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-m6nnt" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.047495 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8742d864-1ca8-492c-aa46-17ea42cf343d","Type":"ContainerDied","Data":"b187c6170d7130f075c2307d10a4f1ae78d6998f19bce3c885e787921ce7f990"} Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.047535 4704 scope.go:117] "RemoveContainer" containerID="484a206d5fefd298d2625280c2d7b1944e23f61d3a534fb26fb554281d74fb85" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.047564 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.065272 4704 scope.go:117] "RemoveContainer" containerID="1ab07f15f9f7d79cc20cd3b5437622859b9b405bd61d23b64a8f3d88a122e511" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.077836 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.086013 4704 scope.go:117] "RemoveContainer" containerID="037a12f83a4703e5c846db3a1499ea00f45c0d0f52cd8b56481bb0203afb39ae" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.095297 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.108953 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:44 crc kubenswrapper[4704]: E0122 16:50:44.109282 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-central-agent" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109296 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-central-agent" Jan 22 16:50:44 crc kubenswrapper[4704]: E0122 16:50:44.109311 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="proxy-httpd" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109317 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="proxy-httpd" Jan 22 16:50:44 crc kubenswrapper[4704]: E0122 16:50:44.109325 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="sg-core" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109331 4704 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="sg-core" Jan 22 16:50:44 crc kubenswrapper[4704]: E0122 16:50:44.109344 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25c2724-3fdb-4b6a-b468-5b8aca733e08" containerName="mariadb-account-create-update" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109350 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25c2724-3fdb-4b6a-b468-5b8aca733e08" containerName="mariadb-account-create-update" Jan 22 16:50:44 crc kubenswrapper[4704]: E0122 16:50:44.109360 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec44ad67-44af-4b98-b389-0d997a87d8e7" containerName="mariadb-database-create" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109365 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec44ad67-44af-4b98-b389-0d997a87d8e7" containerName="mariadb-database-create" Jan 22 16:50:44 crc kubenswrapper[4704]: E0122 16:50:44.109375 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-notification-agent" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109381 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-notification-agent" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109520 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="sg-core" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109537 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-central-agent" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109544 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec44ad67-44af-4b98-b389-0d997a87d8e7" containerName="mariadb-database-create" Jan 22 16:50:44 crc 
kubenswrapper[4704]: I0122 16:50:44.109552 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d25c2724-3fdb-4b6a-b468-5b8aca733e08" containerName="mariadb-account-create-update" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109560 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="proxy-httpd" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.109573 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" containerName="ceilometer-notification-agent" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.110972 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.116060 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.116224 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.117226 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.122978 4704 scope.go:117] "RemoveContainer" containerID="e21988482cfe5a530f2807198b32fb8c66343b73637ae620f62e5a56f696c761" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.127161 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.131369 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.131925 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d26wd\" (UniqueName: \"kubernetes.io/projected/ad0adb95-efd5-4c78-9be8-e3cc68180a88-kube-api-access-d26wd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.132074 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.132122 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-scripts\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.132162 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-run-httpd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.132236 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-config-data\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.132270 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.132323 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-log-httpd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233666 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233734 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-scripts\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233782 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-run-httpd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233849 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-config-data\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233874 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233909 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-log-httpd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233933 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.233970 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d26wd\" (UniqueName: \"kubernetes.io/projected/ad0adb95-efd5-4c78-9be8-e3cc68180a88-kube-api-access-d26wd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.234847 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-run-httpd\") 
pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.237414 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-log-httpd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.237995 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-scripts\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.241154 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-config-data\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.245466 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.245774 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.246743 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.251552 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d26wd\" (UniqueName: \"kubernetes.io/projected/ad0adb95-efd5-4c78-9be8-e3cc68180a88-kube-api-access-d26wd\") pod \"ceilometer-0\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.451935 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:44 crc kubenswrapper[4704]: W0122 16:50:44.945913 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad0adb95_efd5_4c78_9be8_e3cc68180a88.slice/crio-786473a22dd6c919ac4d83fc20c66cb25972dd4279e0b6ed946d05dbf16f9346 WatchSource:0}: Error finding container 786473a22dd6c919ac4d83fc20c66cb25972dd4279e0b6ed946d05dbf16f9346: Status 404 returned error can't find the container with id 786473a22dd6c919ac4d83fc20c66cb25972dd4279e0b6ed946d05dbf16f9346 Jan 22 16:50:44 crc kubenswrapper[4704]: I0122 16:50:44.948636 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.055808 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerStarted","Data":"786473a22dd6c919ac4d83fc20c66cb25972dd4279e0b6ed946d05dbf16f9346"} Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.426329 4704 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7cw48"] Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.427313 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.431585 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-qjzkl" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.431643 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.440601 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7cw48"] Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.454387 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-config-data\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.454627 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbl47\" (UniqueName: \"kubernetes.io/projected/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-kube-api-access-wbl47\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.454773 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: 
\"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.455010 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.555995 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.556490 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.556609 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-config-data\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.556698 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbl47\" (UniqueName: \"kubernetes.io/projected/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-kube-api-access-wbl47\") pod 
\"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.560317 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.560668 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.561100 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-config-data\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.571949 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbl47\" (UniqueName: \"kubernetes.io/projected/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-kube-api-access-wbl47\") pod \"watcher-kuttl-db-sync-7cw48\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:45 crc kubenswrapper[4704]: I0122 16:50:45.645234 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8742d864-1ca8-492c-aa46-17ea42cf343d" path="/var/lib/kubelet/pods/8742d864-1ca8-492c-aa46-17ea42cf343d/volumes" Jan 22 16:50:45 crc 
kubenswrapper[4704]: I0122 16:50:45.811103 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:50:46 crc kubenswrapper[4704]: I0122 16:50:46.135391 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerStarted","Data":"2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08"} Jan 22 16:50:46 crc kubenswrapper[4704]: I0122 16:50:46.432388 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7cw48"] Jan 22 16:50:47 crc kubenswrapper[4704]: I0122 16:50:47.148823 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerStarted","Data":"44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58"} Jan 22 16:50:47 crc kubenswrapper[4704]: I0122 16:50:47.149095 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerStarted","Data":"cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4"} Jan 22 16:50:47 crc kubenswrapper[4704]: I0122 16:50:47.150470 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" event={"ID":"83a6e7c6-a592-4c39-a2e8-95a15df5dec8","Type":"ContainerStarted","Data":"c66aa21c240a2b327718a20663729abf0c31188d6d9273037b6e875e3e406bba"} Jan 22 16:50:49 crc kubenswrapper[4704]: I0122 16:50:49.180999 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerStarted","Data":"c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c"} Jan 22 16:50:49 crc kubenswrapper[4704]: I0122 16:50:49.182034 4704 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:50:49 crc kubenswrapper[4704]: I0122 16:50:49.204576 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.138172279 podStartE2EDuration="5.204560925s" podCreationTimestamp="2026-01-22 16:50:44 +0000 UTC" firstStartedPulling="2026-01-22 16:50:44.949146405 +0000 UTC m=+1337.593693105" lastFinishedPulling="2026-01-22 16:50:48.015535051 +0000 UTC m=+1340.660081751" observedRunningTime="2026-01-22 16:50:49.199314254 +0000 UTC m=+1341.843860944" watchObservedRunningTime="2026-01-22 16:50:49.204560925 +0000 UTC m=+1341.849107625" Jan 22 16:50:49 crc kubenswrapper[4704]: I0122 16:50:49.375469 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 16:51:02 crc kubenswrapper[4704]: E0122 16:51:02.975212 4704 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 22 16:51:02 crc kubenswrapper[4704]: E0122 16:51:02.975759 4704 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.196:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 22 16:51:02 crc kubenswrapper[4704]: E0122 16:51:02.975895 4704 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.196:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbl47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-kuttl-db-sync-7cw48_watcher-kuttl-default(83a6e7c6-a592-4c39-a2e8-95a15df5dec8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 16:51:02 crc kubenswrapper[4704]: E0122 16:51:02.977084 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" podUID="83a6e7c6-a592-4c39-a2e8-95a15df5dec8" Jan 22 16:51:03 crc kubenswrapper[4704]: E0122 16:51:03.304564 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" podUID="83a6e7c6-a592-4c39-a2e8-95a15df5dec8" Jan 22 16:51:13 crc kubenswrapper[4704]: I0122 16:51:13.639210 4704 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:51:14 crc kubenswrapper[4704]: I0122 16:51:14.386861 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" event={"ID":"83a6e7c6-a592-4c39-a2e8-95a15df5dec8","Type":"ContainerStarted","Data":"89c4e83b4ac48c352d7d9291a182158eb1da884bea85d2ded26f1468caf634d3"} Jan 22 16:51:14 crc kubenswrapper[4704]: I0122 16:51:14.410235 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" podStartSLOduration=2.124540323 podStartE2EDuration="29.410212379s" podCreationTimestamp="2026-01-22 16:50:45 +0000 UTC" firstStartedPulling="2026-01-22 16:50:46.44294646 +0000 UTC m=+1339.087493170" lastFinishedPulling="2026-01-22 16:51:13.728618516 +0000 UTC m=+1366.373165226" observedRunningTime="2026-01-22 
16:51:14.401092006 +0000 UTC m=+1367.045638706" watchObservedRunningTime="2026-01-22 16:51:14.410212379 +0000 UTC m=+1367.054759079" Jan 22 16:51:14 crc kubenswrapper[4704]: I0122 16:51:14.459917 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:17 crc kubenswrapper[4704]: I0122 16:51:17.416251 4704 generic.go:334] "Generic (PLEG): container finished" podID="83a6e7c6-a592-4c39-a2e8-95a15df5dec8" containerID="89c4e83b4ac48c352d7d9291a182158eb1da884bea85d2ded26f1468caf634d3" exitCode=0 Jan 22 16:51:17 crc kubenswrapper[4704]: I0122 16:51:17.416400 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" event={"ID":"83a6e7c6-a592-4c39-a2e8-95a15df5dec8","Type":"ContainerDied","Data":"89c4e83b4ac48c352d7d9291a182158eb1da884bea85d2ded26f1468caf634d3"} Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.802837 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.949827 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-config-data\") pod \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.950003 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbl47\" (UniqueName: \"kubernetes.io/projected/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-kube-api-access-wbl47\") pod \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.950077 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-combined-ca-bundle\") pod \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.950182 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-db-sync-config-data\") pod \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\" (UID: \"83a6e7c6-a592-4c39-a2e8-95a15df5dec8\") " Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.955369 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-kube-api-access-wbl47" (OuterVolumeSpecName: "kube-api-access-wbl47") pod "83a6e7c6-a592-4c39-a2e8-95a15df5dec8" (UID: "83a6e7c6-a592-4c39-a2e8-95a15df5dec8"). InnerVolumeSpecName "kube-api-access-wbl47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.958112 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "83a6e7c6-a592-4c39-a2e8-95a15df5dec8" (UID: "83a6e7c6-a592-4c39-a2e8-95a15df5dec8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:18 crc kubenswrapper[4704]: I0122 16:51:18.972984 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83a6e7c6-a592-4c39-a2e8-95a15df5dec8" (UID: "83a6e7c6-a592-4c39-a2e8-95a15df5dec8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.006505 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-config-data" (OuterVolumeSpecName: "config-data") pod "83a6e7c6-a592-4c39-a2e8-95a15df5dec8" (UID: "83a6e7c6-a592-4c39-a2e8-95a15df5dec8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.052558 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.052613 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbl47\" (UniqueName: \"kubernetes.io/projected/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-kube-api-access-wbl47\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.052638 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.052656 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/83a6e7c6-a592-4c39-a2e8-95a15df5dec8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.432990 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" event={"ID":"83a6e7c6-a592-4c39-a2e8-95a15df5dec8","Type":"ContainerDied","Data":"c66aa21c240a2b327718a20663729abf0c31188d6d9273037b6e875e3e406bba"} Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.433379 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c66aa21c240a2b327718a20663729abf0c31188d6d9273037b6e875e3e406bba" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.433103 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7cw48" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.771341 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:19 crc kubenswrapper[4704]: E0122 16:51:19.771851 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a6e7c6-a592-4c39-a2e8-95a15df5dec8" containerName="watcher-kuttl-db-sync" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.771876 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a6e7c6-a592-4c39-a2e8-95a15df5dec8" containerName="watcher-kuttl-db-sync" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.772090 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a6e7c6-a592-4c39-a2e8-95a15df5dec8" containerName="watcher-kuttl-db-sync" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.772850 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.776877 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-qjzkl" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.779154 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.780420 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.780891 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.783360 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.793298 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.804490 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.847130 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.848234 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.849771 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.869944 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.870913 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhkms\" (UniqueName: \"kubernetes.io/projected/6817a7b8-b430-403f-a093-ced1531a317c-kube-api-access-dhkms\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.870964 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.870996 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6817a7b8-b430-403f-a093-ced1531a317c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 
16:51:19.872313 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.872427 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.872488 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.872556 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.872708 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw48k\" (UniqueName: \"kubernetes.io/projected/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-kube-api-access-fw48k\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 
16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.872773 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.881115 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.974618 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.974974 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975009 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975045 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975081 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw48k\" (UniqueName: \"kubernetes.io/projected/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-kube-api-access-fw48k\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975115 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975142 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975183 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975204 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dhkms\" (UniqueName: \"kubernetes.io/projected/6817a7b8-b430-403f-a093-ced1531a317c-kube-api-access-dhkms\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975221 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8f5c\" (UniqueName: \"kubernetes.io/projected/baf2b1b1-b40b-4863-80f9-c61d922575c9-kube-api-access-d8f5c\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975240 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975255 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf2b1b1-b40b-4863-80f9-c61d922575c9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975272 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6817a7b8-b430-403f-a093-ced1531a317c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.975295 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.976280 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6817a7b8-b430-403f-a093-ced1531a317c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.976319 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.979162 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.979302 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.979395 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.980180 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.980777 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.995973 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw48k\" (UniqueName: \"kubernetes.io/projected/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-kube-api-access-fw48k\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.997077 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhkms\" (UniqueName: \"kubernetes.io/projected/6817a7b8-b430-403f-a093-ced1531a317c-kube-api-access-dhkms\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:19 crc kubenswrapper[4704]: I0122 16:51:19.997314 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.076877 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.076932 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8f5c\" (UniqueName: \"kubernetes.io/projected/baf2b1b1-b40b-4863-80f9-c61d922575c9-kube-api-access-d8f5c\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.076962 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf2b1b1-b40b-4863-80f9-c61d922575c9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.077024 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.077515 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf2b1b1-b40b-4863-80f9-c61d922575c9-logs\") pod 
\"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.080466 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.087770 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.093443 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8f5c\" (UniqueName: \"kubernetes.io/projected/baf2b1b1-b40b-4863-80f9-c61d922575c9-kube-api-access-d8f5c\") pod \"watcher-kuttl-applier-0\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.128658 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.138341 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.166375 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.632842 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:20 crc kubenswrapper[4704]: W0122 16:51:20.635655 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod064a6d0f_b46f_4ad6_92b4_8889ec63eda7.slice/crio-f0b915f5c7df0366a8d14b135b5449350c3a576023bac7c337ba36336b8a32f1 WatchSource:0}: Error finding container f0b915f5c7df0366a8d14b135b5449350c3a576023bac7c337ba36336b8a32f1: Status 404 returned error can't find the container with id f0b915f5c7df0366a8d14b135b5449350c3a576023bac7c337ba36336b8a32f1 Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.699888 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:20 crc kubenswrapper[4704]: W0122 16:51:20.712396 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaf2b1b1_b40b_4863_80f9_c61d922575c9.slice/crio-ac76234e5e1107164c78a018f4049ff534de5c9e1645239cd2a72b1582ca92ad WatchSource:0}: Error finding container ac76234e5e1107164c78a018f4049ff534de5c9e1645239cd2a72b1582ca92ad: Status 404 returned error can't find the container with id ac76234e5e1107164c78a018f4049ff534de5c9e1645239cd2a72b1582ca92ad Jan 22 16:51:20 crc kubenswrapper[4704]: W0122 16:51:20.714928 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6817a7b8_b430_403f_a093_ced1531a317c.slice/crio-1f9dcb6bb7012df0772568e48de60191cf86b95db328c26dcdd5d3580ec4a12e WatchSource:0}: Error finding container 1f9dcb6bb7012df0772568e48de60191cf86b95db328c26dcdd5d3580ec4a12e: Status 404 returned error can't find the container with id 
1f9dcb6bb7012df0772568e48de60191cf86b95db328c26dcdd5d3580ec4a12e Jan 22 16:51:20 crc kubenswrapper[4704]: I0122 16:51:20.715808 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:21 crc kubenswrapper[4704]: I0122 16:51:21.453645 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6817a7b8-b430-403f-a093-ced1531a317c","Type":"ContainerStarted","Data":"1f9dcb6bb7012df0772568e48de60191cf86b95db328c26dcdd5d3580ec4a12e"} Jan 22 16:51:21 crc kubenswrapper[4704]: I0122 16:51:21.461589 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064a6d0f-b46f-4ad6-92b4-8889ec63eda7","Type":"ContainerStarted","Data":"cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004"} Jan 22 16:51:21 crc kubenswrapper[4704]: I0122 16:51:21.461626 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064a6d0f-b46f-4ad6-92b4-8889ec63eda7","Type":"ContainerStarted","Data":"e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3"} Jan 22 16:51:21 crc kubenswrapper[4704]: I0122 16:51:21.461639 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064a6d0f-b46f-4ad6-92b4-8889ec63eda7","Type":"ContainerStarted","Data":"f0b915f5c7df0366a8d14b135b5449350c3a576023bac7c337ba36336b8a32f1"} Jan 22 16:51:21 crc kubenswrapper[4704]: I0122 16:51:21.461898 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:21 crc kubenswrapper[4704]: I0122 16:51:21.464132 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"baf2b1b1-b40b-4863-80f9-c61d922575c9","Type":"ContainerStarted","Data":"ac76234e5e1107164c78a018f4049ff534de5c9e1645239cd2a72b1582ca92ad"} Jan 22 16:51:21 crc kubenswrapper[4704]: I0122 16:51:21.484638 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.484618225 podStartE2EDuration="2.484618225s" podCreationTimestamp="2026-01-22 16:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:21.482197916 +0000 UTC m=+1374.126744616" watchObservedRunningTime="2026-01-22 16:51:21.484618225 +0000 UTC m=+1374.129164925" Jan 22 16:51:22 crc kubenswrapper[4704]: I0122 16:51:22.477687 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"baf2b1b1-b40b-4863-80f9-c61d922575c9","Type":"ContainerStarted","Data":"2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367"} Jan 22 16:51:22 crc kubenswrapper[4704]: I0122 16:51:22.480256 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6817a7b8-b430-403f-a093-ced1531a317c","Type":"ContainerStarted","Data":"9b260a686676ec830edaee3b53b8db44b46d9c0393fb195e478152ca755aefd7"} Jan 22 16:51:22 crc kubenswrapper[4704]: I0122 16:51:22.501381 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.269754582 podStartE2EDuration="3.501323121s" podCreationTimestamp="2026-01-22 16:51:19 +0000 UTC" firstStartedPulling="2026-01-22 16:51:20.714952018 +0000 UTC m=+1373.359498718" lastFinishedPulling="2026-01-22 16:51:21.946520507 +0000 UTC m=+1374.591067257" observedRunningTime="2026-01-22 16:51:22.49431408 +0000 UTC m=+1375.138860790" watchObservedRunningTime="2026-01-22 16:51:22.501323121 +0000 UTC 
m=+1375.145869851" Jan 22 16:51:22 crc kubenswrapper[4704]: I0122 16:51:22.531858 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.307542549 podStartE2EDuration="3.530279104s" podCreationTimestamp="2026-01-22 16:51:19 +0000 UTC" firstStartedPulling="2026-01-22 16:51:20.718120349 +0000 UTC m=+1373.362667049" lastFinishedPulling="2026-01-22 16:51:21.940856894 +0000 UTC m=+1374.585403604" observedRunningTime="2026-01-22 16:51:22.521313226 +0000 UTC m=+1375.165859936" watchObservedRunningTime="2026-01-22 16:51:22.530279104 +0000 UTC m=+1375.174825824" Jan 22 16:51:23 crc kubenswrapper[4704]: I0122 16:51:23.644773 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:25 crc kubenswrapper[4704]: I0122 16:51:25.138685 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:25 crc kubenswrapper[4704]: I0122 16:51:25.167130 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.128780 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.138982 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.148390 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.155425 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:30 crc 
kubenswrapper[4704]: I0122 16:51:30.167303 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.197442 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.567126 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.570959 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.606018 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:30 crc kubenswrapper[4704]: I0122 16:51:30.609616 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:31 crc kubenswrapper[4704]: I0122 16:51:31.593051 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:31 crc kubenswrapper[4704]: I0122 16:51:31.593609 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-central-agent" containerID="cri-o://2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08" gracePeriod=30 Jan 22 16:51:31 crc kubenswrapper[4704]: I0122 16:51:31.593688 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="proxy-httpd" containerID="cri-o://c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c" 
gracePeriod=30 Jan 22 16:51:31 crc kubenswrapper[4704]: I0122 16:51:31.593739 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="sg-core" containerID="cri-o://44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58" gracePeriod=30 Jan 22 16:51:31 crc kubenswrapper[4704]: I0122 16:51:31.593785 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-notification-agent" containerID="cri-o://cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4" gracePeriod=30 Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.045335 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7cw48"] Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.051360 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7cw48"] Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.080749 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher4c46-account-delete-gmfl9"] Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.082166 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.095391 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4c46-account-delete-gmfl9"] Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.137110 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.151999 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.218377 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.219774 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkcj\" (UniqueName: \"kubernetes.io/projected/9143eb36-471a-40f7-92b8-7257cce8fc95-kube-api-access-mkkcj\") pod \"watcher4c46-account-delete-gmfl9\" (UID: \"9143eb36-471a-40f7-92b8-7257cce8fc95\") " pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.219891 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9143eb36-471a-40f7-92b8-7257cce8fc95-operator-scripts\") pod \"watcher4c46-account-delete-gmfl9\" (UID: \"9143eb36-471a-40f7-92b8-7257cce8fc95\") " pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.321958 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkkcj\" (UniqueName: \"kubernetes.io/projected/9143eb36-471a-40f7-92b8-7257cce8fc95-kube-api-access-mkkcj\") pod \"watcher4c46-account-delete-gmfl9\" (UID: 
\"9143eb36-471a-40f7-92b8-7257cce8fc95\") " pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.322043 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9143eb36-471a-40f7-92b8-7257cce8fc95-operator-scripts\") pod \"watcher4c46-account-delete-gmfl9\" (UID: \"9143eb36-471a-40f7-92b8-7257cce8fc95\") " pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.322759 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9143eb36-471a-40f7-92b8-7257cce8fc95-operator-scripts\") pod \"watcher4c46-account-delete-gmfl9\" (UID: \"9143eb36-471a-40f7-92b8-7257cce8fc95\") " pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.348741 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkkcj\" (UniqueName: \"kubernetes.io/projected/9143eb36-471a-40f7-92b8-7257cce8fc95-kube-api-access-mkkcj\") pod \"watcher4c46-account-delete-gmfl9\" (UID: \"9143eb36-471a-40f7-92b8-7257cce8fc95\") " pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.401531 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609352 4704 generic.go:334] "Generic (PLEG): container finished" podID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerID="c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c" exitCode=0 Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609398 4704 generic.go:334] "Generic (PLEG): container finished" podID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerID="44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58" exitCode=2 Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609411 4704 generic.go:334] "Generic (PLEG): container finished" podID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerID="2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08" exitCode=0 Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609662 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-kuttl-api-log" containerID="cri-o://e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3" gracePeriod=30 Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609821 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerDied","Data":"c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c"} Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609851 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerDied","Data":"44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58"} Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609868 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerDied","Data":"2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08"} Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.609978 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="baf2b1b1-b40b-4863-80f9-c61d922575c9" containerName="watcher-applier" containerID="cri-o://2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367" gracePeriod=30 Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.610441 4704 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-qjzkl\" not found" Jan 22 16:51:32 crc kubenswrapper[4704]: I0122 16:51:32.610595 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-api" containerID="cri-o://cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004" gracePeriod=30 Jan 22 16:51:32 crc kubenswrapper[4704]: E0122 16:51:32.728619 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 16:51:32 crc kubenswrapper[4704]: E0122 16:51:32.729250 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data podName:6817a7b8-b430-403f-a093-ced1531a317c nodeName:}" failed. No retries permitted until 2026-01-22 16:51:33.229219479 +0000 UTC m=+1385.873766179 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "6817a7b8-b430-403f-a093-ced1531a317c") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.095686 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4c46-account-delete-gmfl9"] Jan 22 16:51:33 crc kubenswrapper[4704]: E0122 16:51:33.242104 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 16:51:33 crc kubenswrapper[4704]: E0122 16:51:33.242172 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data podName:6817a7b8-b430-403f-a093-ced1531a317c nodeName:}" failed. No retries permitted until 2026-01-22 16:51:34.242157959 +0000 UTC m=+1386.886704659 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "6817a7b8-b430-403f-a093-ced1531a317c") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.505521 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.548667 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-combined-ca-bundle\") pod \"baf2b1b1-b40b-4863-80f9-c61d922575c9\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.548857 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8f5c\" (UniqueName: \"kubernetes.io/projected/baf2b1b1-b40b-4863-80f9-c61d922575c9-kube-api-access-d8f5c\") pod \"baf2b1b1-b40b-4863-80f9-c61d922575c9\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.548947 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-config-data\") pod \"baf2b1b1-b40b-4863-80f9-c61d922575c9\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.548966 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf2b1b1-b40b-4863-80f9-c61d922575c9-logs\") pod \"baf2b1b1-b40b-4863-80f9-c61d922575c9\" (UID: \"baf2b1b1-b40b-4863-80f9-c61d922575c9\") " Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.549450 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf2b1b1-b40b-4863-80f9-c61d922575c9-logs" (OuterVolumeSpecName: "logs") pod "baf2b1b1-b40b-4863-80f9-c61d922575c9" (UID: "baf2b1b1-b40b-4863-80f9-c61d922575c9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.560011 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf2b1b1-b40b-4863-80f9-c61d922575c9-kube-api-access-d8f5c" (OuterVolumeSpecName: "kube-api-access-d8f5c") pod "baf2b1b1-b40b-4863-80f9-c61d922575c9" (UID: "baf2b1b1-b40b-4863-80f9-c61d922575c9"). InnerVolumeSpecName "kube-api-access-d8f5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.592940 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baf2b1b1-b40b-4863-80f9-c61d922575c9" (UID: "baf2b1b1-b40b-4863-80f9-c61d922575c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.612986 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-config-data" (OuterVolumeSpecName: "config-data") pod "baf2b1b1-b40b-4863-80f9-c61d922575c9" (UID: "baf2b1b1-b40b-4863-80f9-c61d922575c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.618008 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" event={"ID":"9143eb36-471a-40f7-92b8-7257cce8fc95","Type":"ContainerStarted","Data":"448c1438da4c7b284c12fca557ea491c97c6bcfa93d7d80f3910b62391eaa940"} Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.618052 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" event={"ID":"9143eb36-471a-40f7-92b8-7257cce8fc95","Type":"ContainerStarted","Data":"c249c3d32183e8f5179ad2c1be1ef1ad0df9a4cfa8102855a4423dd8c2bc5654"} Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.620113 4704 generic.go:334] "Generic (PLEG): container finished" podID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerID="e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3" exitCode=143 Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.620165 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064a6d0f-b46f-4ad6-92b4-8889ec63eda7","Type":"ContainerDied","Data":"e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3"} Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.625379 4704 generic.go:334] "Generic (PLEG): container finished" podID="baf2b1b1-b40b-4863-80f9-c61d922575c9" containerID="2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367" exitCode=0 Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.625403 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.625447 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"baf2b1b1-b40b-4863-80f9-c61d922575c9","Type":"ContainerDied","Data":"2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367"} Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.625478 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"baf2b1b1-b40b-4863-80f9-c61d922575c9","Type":"ContainerDied","Data":"ac76234e5e1107164c78a018f4049ff534de5c9e1645239cd2a72b1582ca92ad"} Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.625496 4704 scope.go:117] "RemoveContainer" containerID="2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.625746 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="6817a7b8-b430-403f-a093-ced1531a317c" containerName="watcher-decision-engine" containerID="cri-o://9b260a686676ec830edaee3b53b8db44b46d9c0393fb195e478152ca755aefd7" gracePeriod=30 Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.660173 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" podStartSLOduration=1.6601481869999999 podStartE2EDuration="1.660148187s" podCreationTimestamp="2026-01-22 16:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:33.648617585 +0000 UTC m=+1386.293164285" watchObservedRunningTime="2026-01-22 16:51:33.660148187 +0000 UTC m=+1386.304694887" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.673117 4704 scope.go:117] "RemoveContainer" 
containerID="2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367" Jan 22 16:51:33 crc kubenswrapper[4704]: E0122 16:51:33.682463 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367\": container with ID starting with 2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367 not found: ID does not exist" containerID="2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.682581 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367"} err="failed to get container status \"2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367\": rpc error: code = NotFound desc = could not find container \"2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367\": container with ID starting with 2019a724e42c35f4ac3e19dd73e82cdd3d7fb5e369793630a3faf0de47aa9367 not found: ID does not exist" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.682757 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8f5c\" (UniqueName: \"kubernetes.io/projected/baf2b1b1-b40b-4863-80f9-c61d922575c9-kube-api-access-d8f5c\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.684694 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a6e7c6-a592-4c39-a2e8-95a15df5dec8" path="/var/lib/kubelet/pods/83a6e7c6-a592-4c39-a2e8-95a15df5dec8/volumes" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.684848 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.685103 4704 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baf2b1b1-b40b-4863-80f9-c61d922575c9-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.685121 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf2b1b1-b40b-4863-80f9-c61d922575c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.722858 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:33 crc kubenswrapper[4704]: I0122 16:51:33.733426 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.087466 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.196385 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw48k\" (UniqueName: \"kubernetes.io/projected/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-kube-api-access-fw48k\") pod \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.196522 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-config-data\") pod \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.196552 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-custom-prometheus-ca\") pod 
\"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.196641 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-logs\") pod \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.197206 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-logs" (OuterVolumeSpecName: "logs") pod "064a6d0f-b46f-4ad6-92b4-8889ec63eda7" (UID: "064a6d0f-b46f-4ad6-92b4-8889ec63eda7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.197300 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-combined-ca-bundle\") pod \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\" (UID: \"064a6d0f-b46f-4ad6-92b4-8889ec63eda7\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.197852 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.216046 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-kube-api-access-fw48k" (OuterVolumeSpecName: "kube-api-access-fw48k") pod "064a6d0f-b46f-4ad6-92b4-8889ec63eda7" (UID: "064a6d0f-b46f-4ad6-92b4-8889ec63eda7"). InnerVolumeSpecName "kube-api-access-fw48k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.233537 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "064a6d0f-b46f-4ad6-92b4-8889ec63eda7" (UID: "064a6d0f-b46f-4ad6-92b4-8889ec63eda7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.246934 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "064a6d0f-b46f-4ad6-92b4-8889ec63eda7" (UID: "064a6d0f-b46f-4ad6-92b4-8889ec63eda7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.255983 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-config-data" (OuterVolumeSpecName: "config-data") pod "064a6d0f-b46f-4ad6-92b4-8889ec63eda7" (UID: "064a6d0f-b46f-4ad6-92b4-8889ec63eda7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.299441 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.299480 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.299492 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.299503 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw48k\" (UniqueName: \"kubernetes.io/projected/064a6d0f-b46f-4ad6-92b4-8889ec63eda7-kube-api-access-fw48k\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.299560 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.299742 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data podName:6817a7b8-b430-403f-a093-ced1531a317c nodeName:}" failed. No retries permitted until 2026-01-22 16:51:36.299709311 +0000 UTC m=+1388.944256011 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "6817a7b8-b430-403f-a093-ced1531a317c") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.325370 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.507173 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-scripts\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.507572 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-combined-ca-bundle\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.507614 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-sg-core-conf-yaml\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.507753 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d26wd\" (UniqueName: \"kubernetes.io/projected/ad0adb95-efd5-4c78-9be8-e3cc68180a88-kube-api-access-d26wd\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.507860 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-log-httpd\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.507918 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-ceilometer-tls-certs\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.508011 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-config-data\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.508113 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-run-httpd\") pod \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\" (UID: \"ad0adb95-efd5-4c78-9be8-e3cc68180a88\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.509523 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.509981 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.516155 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0adb95-efd5-4c78-9be8-e3cc68180a88-kube-api-access-d26wd" (OuterVolumeSpecName: "kube-api-access-d26wd") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "kube-api-access-d26wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.518115 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-scripts" (OuterVolumeSpecName: "scripts") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.537971 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.580577 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.597653 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.610821 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.610853 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.610867 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.610878 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d26wd\" (UniqueName: \"kubernetes.io/projected/ad0adb95-efd5-4c78-9be8-e3cc68180a88-kube-api-access-d26wd\") 
on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.610892 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.610904 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.610915 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad0adb95-efd5-4c78-9be8-e3cc68180a88-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.620403 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-config-data" (OuterVolumeSpecName: "config-data") pod "ad0adb95-efd5-4c78-9be8-e3cc68180a88" (UID: "ad0adb95-efd5-4c78-9be8-e3cc68180a88"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.640046 4704 generic.go:334] "Generic (PLEG): container finished" podID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerID="cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4" exitCode=0 Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.640127 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerDied","Data":"cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4"} Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.640155 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ad0adb95-efd5-4c78-9be8-e3cc68180a88","Type":"ContainerDied","Data":"786473a22dd6c919ac4d83fc20c66cb25972dd4279e0b6ed946d05dbf16f9346"} Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.640171 4704 scope.go:117] "RemoveContainer" containerID="c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.641479 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.647709 4704 generic.go:334] "Generic (PLEG): container finished" podID="6817a7b8-b430-403f-a093-ced1531a317c" containerID="9b260a686676ec830edaee3b53b8db44b46d9c0393fb195e478152ca755aefd7" exitCode=0 Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.647807 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6817a7b8-b430-403f-a093-ced1531a317c","Type":"ContainerDied","Data":"9b260a686676ec830edaee3b53b8db44b46d9c0393fb195e478152ca755aefd7"} Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.651212 4704 generic.go:334] "Generic (PLEG): container finished" podID="9143eb36-471a-40f7-92b8-7257cce8fc95" containerID="448c1438da4c7b284c12fca557ea491c97c6bcfa93d7d80f3910b62391eaa940" exitCode=0 Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.651519 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" event={"ID":"9143eb36-471a-40f7-92b8-7257cce8fc95","Type":"ContainerDied","Data":"448c1438da4c7b284c12fca557ea491c97c6bcfa93d7d80f3910b62391eaa940"} Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.659538 4704 generic.go:334] "Generic (PLEG): container finished" podID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerID="cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004" exitCode=0 Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.659688 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.659708 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064a6d0f-b46f-4ad6-92b4-8889ec63eda7","Type":"ContainerDied","Data":"cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004"} Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.660237 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"064a6d0f-b46f-4ad6-92b4-8889ec63eda7","Type":"ContainerDied","Data":"f0b915f5c7df0366a8d14b135b5449350c3a576023bac7c337ba36336b8a32f1"} Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.666096 4704 scope.go:117] "RemoveContainer" containerID="44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.711691 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad0adb95-efd5-4c78-9be8-e3cc68180a88-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.712423 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.718702 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.731086 4704 scope.go:117] "RemoveContainer" containerID="cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.731201 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745121 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.745441 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf2b1b1-b40b-4863-80f9-c61d922575c9" containerName="watcher-applier" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745453 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf2b1b1-b40b-4863-80f9-c61d922575c9" containerName="watcher-applier" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.745477 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-central-agent" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745482 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-central-agent" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.745494 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6817a7b8-b430-403f-a093-ced1531a317c" containerName="watcher-decision-engine" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745501 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6817a7b8-b430-403f-a093-ced1531a317c" containerName="watcher-decision-engine" Jan 22 16:51:34 crc 
kubenswrapper[4704]: E0122 16:51:34.745516 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-notification-agent" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745522 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-notification-agent" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.745536 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-kuttl-api-log" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745542 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-kuttl-api-log" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.745552 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="proxy-httpd" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745558 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="proxy-httpd" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.745569 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="sg-core" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745575 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="sg-core" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.745582 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-api" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745587 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-api" Jan 22 16:51:34 crc 
kubenswrapper[4704]: I0122 16:51:34.745711 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-central-agent" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.745721 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-api" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.746235 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="proxy-httpd" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.746246 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf2b1b1-b40b-4863-80f9-c61d922575c9" containerName="watcher-applier" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.746255 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6817a7b8-b430-403f-a093-ced1531a317c" containerName="watcher-decision-engine" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.746293 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="sg-core" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.746302 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" containerName="ceilometer-notification-agent" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.746313 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" containerName="watcher-kuttl-api-log" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.748144 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.750106 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.750939 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.751080 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.752532 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.759320 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.772826 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.790063 4704 scope.go:117] "RemoveContainer" containerID="2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.810078 4704 scope.go:117] "RemoveContainer" containerID="c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.810615 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c\": container with ID starting with c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c not found: ID does not exist" containerID="c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.810671 4704 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c"} err="failed to get container status \"c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c\": rpc error: code = NotFound desc = could not find container \"c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c\": container with ID starting with c202be22585d22bf9d8be3e64a0dd8307386300d39f2a0fb437b4c181733f87c not found: ID does not exist" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.810705 4704 scope.go:117] "RemoveContainer" containerID="44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.811258 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58\": container with ID starting with 44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58 not found: ID does not exist" containerID="44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.811297 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58"} err="failed to get container status \"44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58\": rpc error: code = NotFound desc = could not find container \"44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58\": container with ID starting with 44f16c04d6d111bb89739c749cfec10153c997e4c3af616c87a2cc52a3a9fc58 not found: ID does not exist" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.811325 4704 scope.go:117] "RemoveContainer" containerID="cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 
16:51:34.811596 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4\": container with ID starting with cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4 not found: ID does not exist" containerID="cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.811629 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4"} err="failed to get container status \"cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4\": rpc error: code = NotFound desc = could not find container \"cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4\": container with ID starting with cb0378f6577538545414198fd088a606e8dfd392b86b095f07f90fb4c085bfe4 not found: ID does not exist" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.811655 4704 scope.go:117] "RemoveContainer" containerID="2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.811988 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08\": container with ID starting with 2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08 not found: ID does not exist" containerID="2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.812025 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08"} err="failed to get container status \"2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08\": rpc 
error: code = NotFound desc = could not find container \"2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08\": container with ID starting with 2e5d9c109af3e6a5b754ec52c69d3e982f01e2bfde885e5696a9d3387de22a08 not found: ID does not exist" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.812046 4704 scope.go:117] "RemoveContainer" containerID="cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.812750 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-run-httpd\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.812807 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.812858 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9sc\" (UniqueName: \"kubernetes.io/projected/345dd774-7383-4014-87e9-461bd165f674-kube-api-access-5c9sc\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.812890 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-log-httpd\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc 
kubenswrapper[4704]: I0122 16:51:34.812913 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.813152 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-scripts\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.813189 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-config-data\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.813219 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.836008 4704 scope.go:117] "RemoveContainer" containerID="e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.868256 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.869761 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-5c9sc log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/ceilometer-0" podUID="345dd774-7383-4014-87e9-461bd165f674" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.870052 4704 scope.go:117] "RemoveContainer" containerID="cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.870680 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004\": container with ID starting with cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004 not found: ID does not exist" containerID="cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.870764 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004"} err="failed to get container status \"cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004\": rpc error: code = NotFound desc = could not find container \"cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004\": container with ID starting with cd2fab48cf7184b998d92495dcaf0a4d3179b266b614e6bde57105c2d6885004 not found: ID does not exist" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.870818 4704 scope.go:117] "RemoveContainer" containerID="e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3" Jan 22 16:51:34 crc kubenswrapper[4704]: E0122 16:51:34.872380 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3\": container with ID starting with 
e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3 not found: ID does not exist" containerID="e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.872410 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3"} err="failed to get container status \"e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3\": rpc error: code = NotFound desc = could not find container \"e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3\": container with ID starting with e0bd04592353cc79b020f9eaed5dcdb066336eed2416e029ae7fd3f5251488d3 not found: ID does not exist" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914025 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhkms\" (UniqueName: \"kubernetes.io/projected/6817a7b8-b430-403f-a093-ced1531a317c-kube-api-access-dhkms\") pod \"6817a7b8-b430-403f-a093-ced1531a317c\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914111 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6817a7b8-b430-403f-a093-ced1531a317c-logs\") pod \"6817a7b8-b430-403f-a093-ced1531a317c\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914264 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-combined-ca-bundle\") pod \"6817a7b8-b430-403f-a093-ced1531a317c\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914365 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" 
(UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-custom-prometheus-ca\") pod \"6817a7b8-b430-403f-a093-ced1531a317c\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914412 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data\") pod \"6817a7b8-b430-403f-a093-ced1531a317c\" (UID: \"6817a7b8-b430-403f-a093-ced1531a317c\") " Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914490 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6817a7b8-b430-403f-a093-ced1531a317c-logs" (OuterVolumeSpecName: "logs") pod "6817a7b8-b430-403f-a093-ced1531a317c" (UID: "6817a7b8-b430-403f-a093-ced1531a317c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914824 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914875 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9sc\" (UniqueName: \"kubernetes.io/projected/345dd774-7383-4014-87e9-461bd165f674-kube-api-access-5c9sc\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914919 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-log-httpd\") pod \"ceilometer-0\" (UID: 
\"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.914947 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.915033 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-scripts\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.915061 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-config-data\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.915084 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.915104 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-run-httpd\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.915162 4704 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6817a7b8-b430-403f-a093-ced1531a317c-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.915592 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-run-httpd\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.915968 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-log-httpd\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.918873 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6817a7b8-b430-403f-a093-ced1531a317c-kube-api-access-dhkms" (OuterVolumeSpecName: "kube-api-access-dhkms") pod "6817a7b8-b430-403f-a093-ced1531a317c" (UID: "6817a7b8-b430-403f-a093-ced1531a317c"). InnerVolumeSpecName "kube-api-access-dhkms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.920274 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.922827 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.923350 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-scripts\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.924672 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-config-data\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.939103 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.939319 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5c9sc\" (UniqueName: \"kubernetes.io/projected/345dd774-7383-4014-87e9-461bd165f674-kube-api-access-5c9sc\") pod \"ceilometer-0\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.942861 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6817a7b8-b430-403f-a093-ced1531a317c" (UID: "6817a7b8-b430-403f-a093-ced1531a317c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.949040 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "6817a7b8-b430-403f-a093-ced1531a317c" (UID: "6817a7b8-b430-403f-a093-ced1531a317c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:34 crc kubenswrapper[4704]: I0122 16:51:34.968374 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data" (OuterVolumeSpecName: "config-data") pod "6817a7b8-b430-403f-a093-ced1531a317c" (UID: "6817a7b8-b430-403f-a093-ced1531a317c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.017494 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhkms\" (UniqueName: \"kubernetes.io/projected/6817a7b8-b430-403f-a093-ced1531a317c-kube-api-access-dhkms\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.017832 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.017907 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.017959 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6817a7b8-b430-403f-a093-ced1531a317c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.649977 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064a6d0f-b46f-4ad6-92b4-8889ec63eda7" path="/var/lib/kubelet/pods/064a6d0f-b46f-4ad6-92b4-8889ec63eda7/volumes" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.650849 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0adb95-efd5-4c78-9be8-e3cc68180a88" path="/var/lib/kubelet/pods/ad0adb95-efd5-4c78-9be8-e3cc68180a88/volumes" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.652530 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf2b1b1-b40b-4863-80f9-c61d922575c9" path="/var/lib/kubelet/pods/baf2b1b1-b40b-4863-80f9-c61d922575c9/volumes" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.680309 4704 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.681063 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.681699 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"6817a7b8-b430-403f-a093-ced1531a317c","Type":"ContainerDied","Data":"1f9dcb6bb7012df0772568e48de60191cf86b95db328c26dcdd5d3580ec4a12e"} Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.681754 4704 scope.go:117] "RemoveContainer" containerID="9b260a686676ec830edaee3b53b8db44b46d9c0393fb195e478152ca755aefd7" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.714925 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726422 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-run-httpd\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726481 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c9sc\" (UniqueName: \"kubernetes.io/projected/345dd774-7383-4014-87e9-461bd165f674-kube-api-access-5c9sc\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726533 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-sg-core-conf-yaml\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: 
\"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726574 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-log-httpd\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726606 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-config-data\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726647 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-scripts\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726678 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-ceilometer-tls-certs\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.726723 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-combined-ca-bundle\") pod \"345dd774-7383-4014-87e9-461bd165f674\" (UID: \"345dd774-7383-4014-87e9-461bd165f674\") " Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.733246 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.742305 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.752962 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.755877 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.756738 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-config-data" (OuterVolumeSpecName: "config-data") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.756889 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.762019 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-scripts" (OuterVolumeSpecName: "scripts") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.764949 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.767361 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345dd774-7383-4014-87e9-461bd165f674-kube-api-access-5c9sc" (OuterVolumeSpecName: "kube-api-access-5c9sc") pod "345dd774-7383-4014-87e9-461bd165f674" (UID: "345dd774-7383-4014-87e9-461bd165f674"). InnerVolumeSpecName "kube-api-access-5c9sc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.786725 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830038 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830084 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830094 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830104 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c9sc\" (UniqueName: \"kubernetes.io/projected/345dd774-7383-4014-87e9-461bd165f674-kube-api-access-5c9sc\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830118 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830129 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345dd774-7383-4014-87e9-461bd165f674-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830138 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:35 crc kubenswrapper[4704]: I0122 16:51:35.830148 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/345dd774-7383-4014-87e9-461bd165f674-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.131814 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.234922 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkkcj\" (UniqueName: \"kubernetes.io/projected/9143eb36-471a-40f7-92b8-7257cce8fc95-kube-api-access-mkkcj\") pod \"9143eb36-471a-40f7-92b8-7257cce8fc95\" (UID: \"9143eb36-471a-40f7-92b8-7257cce8fc95\") " Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.235068 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9143eb36-471a-40f7-92b8-7257cce8fc95-operator-scripts\") pod \"9143eb36-471a-40f7-92b8-7257cce8fc95\" (UID: \"9143eb36-471a-40f7-92b8-7257cce8fc95\") " Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.236272 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9143eb36-471a-40f7-92b8-7257cce8fc95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9143eb36-471a-40f7-92b8-7257cce8fc95" (UID: "9143eb36-471a-40f7-92b8-7257cce8fc95"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.249046 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9143eb36-471a-40f7-92b8-7257cce8fc95-kube-api-access-mkkcj" (OuterVolumeSpecName: "kube-api-access-mkkcj") pod "9143eb36-471a-40f7-92b8-7257cce8fc95" (UID: "9143eb36-471a-40f7-92b8-7257cce8fc95"). InnerVolumeSpecName "kube-api-access-mkkcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.336885 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkkcj\" (UniqueName: \"kubernetes.io/projected/9143eb36-471a-40f7-92b8-7257cce8fc95-kube-api-access-mkkcj\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.336915 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9143eb36-471a-40f7-92b8-7257cce8fc95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.689563 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.689572 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.689582 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4c46-account-delete-gmfl9" event={"ID":"9143eb36-471a-40f7-92b8-7257cce8fc95","Type":"ContainerDied","Data":"c249c3d32183e8f5179ad2c1be1ef1ad0df9a4cfa8102855a4423dd8c2bc5654"} Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.690843 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c249c3d32183e8f5179ad2c1be1ef1ad0df9a4cfa8102855a4423dd8c2bc5654" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.744162 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.765377 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.776591 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:36 crc kubenswrapper[4704]: E0122 16:51:36.777053 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143eb36-471a-40f7-92b8-7257cce8fc95" containerName="mariadb-account-delete" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.777076 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143eb36-471a-40f7-92b8-7257cce8fc95" containerName="mariadb-account-delete" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.777362 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143eb36-471a-40f7-92b8-7257cce8fc95" containerName="mariadb-account-delete" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.779252 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.782443 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.782826 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.782904 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.787769 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946192 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-scripts\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946264 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946298 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-config-data\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946473 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946614 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-log-httpd\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946644 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946687 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqh8\" (UniqueName: \"kubernetes.io/projected/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-kube-api-access-mnqh8\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:36 crc kubenswrapper[4704]: I0122 16:51:36.946724 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-run-httpd\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048182 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048246 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-config-data\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048296 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048339 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-log-httpd\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048360 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048397 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqh8\" (UniqueName: 
\"kubernetes.io/projected/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-kube-api-access-mnqh8\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048608 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-run-httpd\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.048653 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-scripts\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.049772 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-log-httpd\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.049848 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-run-httpd\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.053298 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc 
kubenswrapper[4704]: I0122 16:51:37.053457 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-config-data\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.053775 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.054607 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-scripts\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.055357 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.068448 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqh8\" (UniqueName: \"kubernetes.io/projected/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-kube-api-access-mnqh8\") pod \"ceilometer-0\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.100906 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.538171 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.644599 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345dd774-7383-4014-87e9-461bd165f674" path="/var/lib/kubelet/pods/345dd774-7383-4014-87e9-461bd165f674/volumes" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.645003 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6817a7b8-b430-403f-a093-ced1531a317c" path="/var/lib/kubelet/pods/6817a7b8-b430-403f-a093-ced1531a317c/volumes" Jan 22 16:51:37 crc kubenswrapper[4704]: I0122 16:51:37.697987 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerStarted","Data":"8d0cddb92a0c66e0d5c3ce2801200db325a53c60b2baa73973a3545778da88a2"} Jan 22 16:51:38 crc kubenswrapper[4704]: I0122 16:51:38.707742 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerStarted","Data":"fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88"} Jan 22 16:51:39 crc kubenswrapper[4704]: I0122 16:51:39.717854 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerStarted","Data":"4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8"} Jan 22 16:51:40 crc kubenswrapper[4704]: I0122 16:51:40.730176 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerStarted","Data":"2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c"} Jan 22 16:51:41 crc 
kubenswrapper[4704]: I0122 16:51:41.765082 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerStarted","Data":"340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366"} Jan 22 16:51:41 crc kubenswrapper[4704]: I0122 16:51:41.765521 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:51:41 crc kubenswrapper[4704]: I0122 16:51:41.792384 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.474716167 podStartE2EDuration="5.792361532s" podCreationTimestamp="2026-01-22 16:51:36 +0000 UTC" firstStartedPulling="2026-01-22 16:51:37.541766951 +0000 UTC m=+1390.186313651" lastFinishedPulling="2026-01-22 16:51:40.859412316 +0000 UTC m=+1393.503959016" observedRunningTime="2026-01-22 16:51:41.784827645 +0000 UTC m=+1394.429374345" watchObservedRunningTime="2026-01-22 16:51:41.792361532 +0000 UTC m=+1394.436908232" Jan 22 16:51:42 crc kubenswrapper[4704]: I0122 16:51:42.109293 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-m6nnt"] Jan 22 16:51:42 crc kubenswrapper[4704]: I0122 16:51:42.114531 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-m6nnt"] Jan 22 16:51:42 crc kubenswrapper[4704]: I0122 16:51:42.122187 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher4c46-account-delete-gmfl9"] Jan 22 16:51:42 crc kubenswrapper[4704]: I0122 16:51:42.128459 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf"] Jan 22 16:51:42 crc kubenswrapper[4704]: I0122 16:51:42.134396 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher4c46-account-delete-gmfl9"] Jan 22 16:51:42 crc 
kubenswrapper[4704]: I0122 16:51:42.141468 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-4c46-account-create-update-6vrgf"] Jan 22 16:51:43 crc kubenswrapper[4704]: I0122 16:51:43.645444 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9143eb36-471a-40f7-92b8-7257cce8fc95" path="/var/lib/kubelet/pods/9143eb36-471a-40f7-92b8-7257cce8fc95/volumes" Jan 22 16:51:43 crc kubenswrapper[4704]: I0122 16:51:43.646453 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d25c2724-3fdb-4b6a-b468-5b8aca733e08" path="/var/lib/kubelet/pods/d25c2724-3fdb-4b6a-b468-5b8aca733e08/volumes" Jan 22 16:51:43 crc kubenswrapper[4704]: I0122 16:51:43.647022 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec44ad67-44af-4b98-b389-0d997a87d8e7" path="/var/lib/kubelet/pods/ec44ad67-44af-4b98-b389-0d997a87d8e7/volumes" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.121155 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-0468-account-create-update-42rph"] Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.122516 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.124899 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.147130 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-jl977"] Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.148190 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.171228 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0468-account-create-update-42rph"] Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.197105 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jl977"] Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.285346 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsxj\" (UniqueName: \"kubernetes.io/projected/50fc2fb9-9bc4-4f20-8258-2b471827216a-kube-api-access-8wsxj\") pod \"watcher-db-create-jl977\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.285399 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50fc2fb9-9bc4-4f20-8258-2b471827216a-operator-scripts\") pod \"watcher-db-create-jl977\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.285437 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsdj\" (UniqueName: \"kubernetes.io/projected/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-kube-api-access-ptsdj\") pod \"watcher-0468-account-create-update-42rph\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.285500 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-operator-scripts\") pod \"watcher-0468-account-create-update-42rph\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.386779 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-operator-scripts\") pod \"watcher-0468-account-create-update-42rph\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.386911 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsxj\" (UniqueName: \"kubernetes.io/projected/50fc2fb9-9bc4-4f20-8258-2b471827216a-kube-api-access-8wsxj\") pod \"watcher-db-create-jl977\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.386947 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50fc2fb9-9bc4-4f20-8258-2b471827216a-operator-scripts\") pod \"watcher-db-create-jl977\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.386984 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptsdj\" (UniqueName: \"kubernetes.io/projected/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-kube-api-access-ptsdj\") pod \"watcher-0468-account-create-update-42rph\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.387411 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-operator-scripts\") pod \"watcher-0468-account-create-update-42rph\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.388052 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50fc2fb9-9bc4-4f20-8258-2b471827216a-operator-scripts\") pod \"watcher-db-create-jl977\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.408459 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptsdj\" (UniqueName: \"kubernetes.io/projected/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-kube-api-access-ptsdj\") pod \"watcher-0468-account-create-update-42rph\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.408674 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsxj\" (UniqueName: \"kubernetes.io/projected/50fc2fb9-9bc4-4f20-8258-2b471827216a-kube-api-access-8wsxj\") pod \"watcher-db-create-jl977\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.451322 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.468927 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:45 crc kubenswrapper[4704]: I0122 16:51:45.947769 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0468-account-create-update-42rph"] Jan 22 16:51:45 crc kubenswrapper[4704]: W0122 16:51:45.950842 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fc8f4c0_e69d_48db_baf3_7d2ba6682898.slice/crio-ffaf54b9134b40da1e1d3584fbf5803f00f072bc74c42ce97e2ed7a60d0f4853 WatchSource:0}: Error finding container ffaf54b9134b40da1e1d3584fbf5803f00f072bc74c42ce97e2ed7a60d0f4853: Status 404 returned error can't find the container with id ffaf54b9134b40da1e1d3584fbf5803f00f072bc74c42ce97e2ed7a60d0f4853 Jan 22 16:51:46 crc kubenswrapper[4704]: W0122 16:51:46.046220 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50fc2fb9_9bc4_4f20_8258_2b471827216a.slice/crio-052018bb6c2eebe0705dc761138f371418543e24a6acf694163c3cb817dfbcce WatchSource:0}: Error finding container 052018bb6c2eebe0705dc761138f371418543e24a6acf694163c3cb817dfbcce: Status 404 returned error can't find the container with id 052018bb6c2eebe0705dc761138f371418543e24a6acf694163c3cb817dfbcce Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.056084 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jl977"] Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.807064 4704 generic.go:334] "Generic (PLEG): container finished" podID="7fc8f4c0-e69d-48db-baf3-7d2ba6682898" containerID="2b1dbe0213f448866562a96e05a72bc97bc23264ca8ad2d6417d38ded492bdb2" exitCode=0 Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.807515 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" 
event={"ID":"7fc8f4c0-e69d-48db-baf3-7d2ba6682898","Type":"ContainerDied","Data":"2b1dbe0213f448866562a96e05a72bc97bc23264ca8ad2d6417d38ded492bdb2"} Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.807557 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" event={"ID":"7fc8f4c0-e69d-48db-baf3-7d2ba6682898","Type":"ContainerStarted","Data":"ffaf54b9134b40da1e1d3584fbf5803f00f072bc74c42ce97e2ed7a60d0f4853"} Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.811067 4704 generic.go:334] "Generic (PLEG): container finished" podID="50fc2fb9-9bc4-4f20-8258-2b471827216a" containerID="3602bc548fb24dd57cc5ae10664d11e46749da8779138552bb85719b7fc625b7" exitCode=0 Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.811217 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jl977" event={"ID":"50fc2fb9-9bc4-4f20-8258-2b471827216a","Type":"ContainerDied","Data":"3602bc548fb24dd57cc5ae10664d11e46749da8779138552bb85719b7fc625b7"} Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.811301 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jl977" event={"ID":"50fc2fb9-9bc4-4f20-8258-2b471827216a","Type":"ContainerStarted","Data":"052018bb6c2eebe0705dc761138f371418543e24a6acf694163c3cb817dfbcce"} Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.867691 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c49r4"] Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.869328 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:46 crc kubenswrapper[4704]: I0122 16:51:46.885578 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c49r4"] Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.015607 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-catalog-content\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.015821 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkq4\" (UniqueName: \"kubernetes.io/projected/cb9e1085-03ce-419b-be48-cee95435cc94-kube-api-access-qpkq4\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.015937 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-utilities\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.118273 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-catalog-content\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.118322 4704 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qpkq4\" (UniqueName: \"kubernetes.io/projected/cb9e1085-03ce-419b-be48-cee95435cc94-kube-api-access-qpkq4\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.118343 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-utilities\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.118926 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-utilities\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.118971 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-catalog-content\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.141804 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpkq4\" (UniqueName: \"kubernetes.io/projected/cb9e1085-03ce-419b-be48-cee95435cc94-kube-api-access-qpkq4\") pod \"redhat-operators-c49r4\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") " pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.189103 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.735015 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c49r4"] Jan 22 16:51:47 crc kubenswrapper[4704]: W0122 16:51:47.736985 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb9e1085_03ce_419b_be48_cee95435cc94.slice/crio-43088049ad8c8ad1ec69eb877c978c027dff99644390c823983d06792ff71dcf WatchSource:0}: Error finding container 43088049ad8c8ad1ec69eb877c978c027dff99644390c823983d06792ff71dcf: Status 404 returned error can't find the container with id 43088049ad8c8ad1ec69eb877c978c027dff99644390c823983d06792ff71dcf Jan 22 16:51:47 crc kubenswrapper[4704]: I0122 16:51:47.824573 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c49r4" event={"ID":"cb9e1085-03ce-419b-be48-cee95435cc94","Type":"ContainerStarted","Data":"43088049ad8c8ad1ec69eb877c978c027dff99644390c823983d06792ff71dcf"} Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.109951 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.204111 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.234440 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wsxj\" (UniqueName: \"kubernetes.io/projected/50fc2fb9-9bc4-4f20-8258-2b471827216a-kube-api-access-8wsxj\") pod \"50fc2fb9-9bc4-4f20-8258-2b471827216a\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.234578 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50fc2fb9-9bc4-4f20-8258-2b471827216a-operator-scripts\") pod \"50fc2fb9-9bc4-4f20-8258-2b471827216a\" (UID: \"50fc2fb9-9bc4-4f20-8258-2b471827216a\") " Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.236137 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fc2fb9-9bc4-4f20-8258-2b471827216a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50fc2fb9-9bc4-4f20-8258-2b471827216a" (UID: "50fc2fb9-9bc4-4f20-8258-2b471827216a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.248921 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50fc2fb9-9bc4-4f20-8258-2b471827216a-kube-api-access-8wsxj" (OuterVolumeSpecName: "kube-api-access-8wsxj") pod "50fc2fb9-9bc4-4f20-8258-2b471827216a" (UID: "50fc2fb9-9bc4-4f20-8258-2b471827216a"). InnerVolumeSpecName "kube-api-access-8wsxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.335933 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-operator-scripts\") pod \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.336090 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptsdj\" (UniqueName: \"kubernetes.io/projected/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-kube-api-access-ptsdj\") pod \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\" (UID: \"7fc8f4c0-e69d-48db-baf3-7d2ba6682898\") " Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.336478 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fc8f4c0-e69d-48db-baf3-7d2ba6682898" (UID: "7fc8f4c0-e69d-48db-baf3-7d2ba6682898"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.336551 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.336565 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wsxj\" (UniqueName: \"kubernetes.io/projected/50fc2fb9-9bc4-4f20-8258-2b471827216a-kube-api-access-8wsxj\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.336575 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50fc2fb9-9bc4-4f20-8258-2b471827216a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.339247 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-kube-api-access-ptsdj" (OuterVolumeSpecName: "kube-api-access-ptsdj") pod "7fc8f4c0-e69d-48db-baf3-7d2ba6682898" (UID: "7fc8f4c0-e69d-48db-baf3-7d2ba6682898"). InnerVolumeSpecName "kube-api-access-ptsdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.437541 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptsdj\" (UniqueName: \"kubernetes.io/projected/7fc8f4c0-e69d-48db-baf3-7d2ba6682898-kube-api-access-ptsdj\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.832967 4704 generic.go:334] "Generic (PLEG): container finished" podID="cb9e1085-03ce-419b-be48-cee95435cc94" containerID="dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7" exitCode=0 Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.833022 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c49r4" event={"ID":"cb9e1085-03ce-419b-be48-cee95435cc94","Type":"ContainerDied","Data":"dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7"} Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.836264 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" event={"ID":"7fc8f4c0-e69d-48db-baf3-7d2ba6682898","Type":"ContainerDied","Data":"ffaf54b9134b40da1e1d3584fbf5803f00f072bc74c42ce97e2ed7a60d0f4853"} Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.836444 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0468-account-create-update-42rph" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.839081 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jl977" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.839165 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffaf54b9134b40da1e1d3584fbf5803f00f072bc74c42ce97e2ed7a60d0f4853" Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.839239 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jl977" event={"ID":"50fc2fb9-9bc4-4f20-8258-2b471827216a","Type":"ContainerDied","Data":"052018bb6c2eebe0705dc761138f371418543e24a6acf694163c3cb817dfbcce"} Jan 22 16:51:48 crc kubenswrapper[4704]: I0122 16:51:48.839319 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052018bb6c2eebe0705dc761138f371418543e24a6acf694163c3cb817dfbcce" Jan 22 16:51:49 crc kubenswrapper[4704]: I0122 16:51:49.851138 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c49r4" event={"ID":"cb9e1085-03ce-419b-be48-cee95435cc94","Type":"ContainerStarted","Data":"8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599"} Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.483267 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7"] Jan 22 16:51:50 crc kubenswrapper[4704]: E0122 16:51:50.483645 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc8f4c0-e69d-48db-baf3-7d2ba6682898" containerName="mariadb-account-create-update" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.483665 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc8f4c0-e69d-48db-baf3-7d2ba6682898" containerName="mariadb-account-create-update" Jan 22 16:51:50 crc kubenswrapper[4704]: E0122 16:51:50.483687 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fc2fb9-9bc4-4f20-8258-2b471827216a" containerName="mariadb-database-create" Jan 22 16:51:50 crc 
kubenswrapper[4704]: I0122 16:51:50.483697 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fc2fb9-9bc4-4f20-8258-2b471827216a" containerName="mariadb-database-create" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.484196 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fc2fb9-9bc4-4f20-8258-2b471827216a" containerName="mariadb-database-create" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.484221 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc8f4c0-e69d-48db-baf3-7d2ba6682898" containerName="mariadb-account-create-update" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.484959 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.488407 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zt2cq" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.491303 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.493648 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7"] Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.671013 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.671117 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-db-sync-config-data\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.671181 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-config-data\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.671231 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ldn\" (UniqueName: \"kubernetes.io/projected/8c285671-db10-4200-88df-18152de48011-kube-api-access-n6ldn\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.773172 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-config-data\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.773247 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ldn\" (UniqueName: \"kubernetes.io/projected/8c285671-db10-4200-88df-18152de48011-kube-api-access-n6ldn\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.773297 4704 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.773347 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-db-sync-config-data\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.778504 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.780603 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-config-data\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.790437 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-db-sync-config-data\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.803546 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n6ldn\" (UniqueName: \"kubernetes.io/projected/8c285671-db10-4200-88df-18152de48011-kube-api-access-n6ldn\") pod \"watcher-kuttl-db-sync-nz2l7\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.805607 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.864299 4704 generic.go:334] "Generic (PLEG): container finished" podID="cb9e1085-03ce-419b-be48-cee95435cc94" containerID="8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599" exitCode=0 Jan 22 16:51:50 crc kubenswrapper[4704]: I0122 16:51:50.864354 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c49r4" event={"ID":"cb9e1085-03ce-419b-be48-cee95435cc94","Type":"ContainerDied","Data":"8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599"} Jan 22 16:51:51 crc kubenswrapper[4704]: I0122 16:51:51.289109 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7"] Jan 22 16:51:51 crc kubenswrapper[4704]: I0122 16:51:51.872996 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" event={"ID":"8c285671-db10-4200-88df-18152de48011","Type":"ContainerStarted","Data":"c674bd9ce6e99042883be47c7027625649b436f37366418df546d4047897fe51"} Jan 22 16:51:52 crc kubenswrapper[4704]: I0122 16:51:52.881984 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" event={"ID":"8c285671-db10-4200-88df-18152de48011","Type":"ContainerStarted","Data":"832888ea865d6efe665cbcccd50b683001940b8ddd1731a695ebfeee3e36ed5e"} Jan 22 16:51:52 crc kubenswrapper[4704]: I0122 16:51:52.885420 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-c49r4" event={"ID":"cb9e1085-03ce-419b-be48-cee95435cc94","Type":"ContainerStarted","Data":"ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0"} Jan 22 16:51:52 crc kubenswrapper[4704]: I0122 16:51:52.909636 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" podStartSLOduration=2.909613434 podStartE2EDuration="2.909613434s" podCreationTimestamp="2026-01-22 16:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:52.898253887 +0000 UTC m=+1405.542800587" watchObservedRunningTime="2026-01-22 16:51:52.909613434 +0000 UTC m=+1405.554160144" Jan 22 16:51:52 crc kubenswrapper[4704]: I0122 16:51:52.927009 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c49r4" podStartSLOduration=3.779536594 podStartE2EDuration="6.926990633s" podCreationTimestamp="2026-01-22 16:51:46 +0000 UTC" firstStartedPulling="2026-01-22 16:51:48.834922774 +0000 UTC m=+1401.479469474" lastFinishedPulling="2026-01-22 16:51:51.982376813 +0000 UTC m=+1404.626923513" observedRunningTime="2026-01-22 16:51:52.926197971 +0000 UTC m=+1405.570744701" watchObservedRunningTime="2026-01-22 16:51:52.926990633 +0000 UTC m=+1405.571537343" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.666142 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wjvtn"] Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.668654 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.681978 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjvtn"] Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.755465 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-utilities\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.755822 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-catalog-content\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.755844 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56478\" (UniqueName: \"kubernetes.io/projected/a60eced7-3155-49b2-8989-a4ae5d2cef29-kube-api-access-56478\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.857149 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-utilities\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.857211 4704 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-catalog-content\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.857242 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56478\" (UniqueName: \"kubernetes.io/projected/a60eced7-3155-49b2-8989-a4ae5d2cef29-kube-api-access-56478\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.857777 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-utilities\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.858331 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-catalog-content\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.880475 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56478\" (UniqueName: \"kubernetes.io/projected/a60eced7-3155-49b2-8989-a4ae5d2cef29-kube-api-access-56478\") pod \"redhat-marketplace-wjvtn\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") " pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.917638 4704 generic.go:334] "Generic (PLEG): container finished" 
podID="8c285671-db10-4200-88df-18152de48011" containerID="832888ea865d6efe665cbcccd50b683001940b8ddd1731a695ebfeee3e36ed5e" exitCode=0 Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.917684 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" event={"ID":"8c285671-db10-4200-88df-18152de48011","Type":"ContainerDied","Data":"832888ea865d6efe665cbcccd50b683001940b8ddd1731a695ebfeee3e36ed5e"} Jan 22 16:51:55 crc kubenswrapper[4704]: I0122 16:51:55.990188 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjvtn" Jan 22 16:51:56 crc kubenswrapper[4704]: I0122 16:51:56.480393 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjvtn"] Jan 22 16:51:56 crc kubenswrapper[4704]: W0122 16:51:56.492042 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda60eced7_3155_49b2_8989_a4ae5d2cef29.slice/crio-611dc9dac038ce24b2214b84d825837ca38a3801b614151c55ac9ce6517e55f6 WatchSource:0}: Error finding container 611dc9dac038ce24b2214b84d825837ca38a3801b614151c55ac9ce6517e55f6: Status 404 returned error can't find the container with id 611dc9dac038ce24b2214b84d825837ca38a3801b614151c55ac9ce6517e55f6 Jan 22 16:51:56 crc kubenswrapper[4704]: I0122 16:51:56.926995 4704 generic.go:334] "Generic (PLEG): container finished" podID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerID="72fab189c6315bf57f11e4ea0b185fb3d1ac6411d1d1e651ec24d53491d8854d" exitCode=0 Jan 22 16:51:56 crc kubenswrapper[4704]: I0122 16:51:56.927079 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjvtn" event={"ID":"a60eced7-3155-49b2-8989-a4ae5d2cef29","Type":"ContainerDied","Data":"72fab189c6315bf57f11e4ea0b185fb3d1ac6411d1d1e651ec24d53491d8854d"} Jan 22 16:51:56 crc kubenswrapper[4704]: I0122 16:51:56.927151 
4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjvtn" event={"ID":"a60eced7-3155-49b2-8989-a4ae5d2cef29","Type":"ContainerStarted","Data":"611dc9dac038ce24b2214b84d825837ca38a3801b614151c55ac9ce6517e55f6"} Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.189805 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.189861 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c49r4" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.295983 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.382468 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-combined-ca-bundle\") pod \"8c285671-db10-4200-88df-18152de48011\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.382637 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-db-sync-config-data\") pod \"8c285671-db10-4200-88df-18152de48011\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.382730 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ldn\" (UniqueName: \"kubernetes.io/projected/8c285671-db10-4200-88df-18152de48011-kube-api-access-n6ldn\") pod \"8c285671-db10-4200-88df-18152de48011\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.383003 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-config-data\") pod \"8c285671-db10-4200-88df-18152de48011\" (UID: \"8c285671-db10-4200-88df-18152de48011\") " Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.404109 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c285671-db10-4200-88df-18152de48011-kube-api-access-n6ldn" (OuterVolumeSpecName: "kube-api-access-n6ldn") pod "8c285671-db10-4200-88df-18152de48011" (UID: "8c285671-db10-4200-88df-18152de48011"). InnerVolumeSpecName "kube-api-access-n6ldn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.405548 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8c285671-db10-4200-88df-18152de48011" (UID: "8c285671-db10-4200-88df-18152de48011"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.424716 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c285671-db10-4200-88df-18152de48011" (UID: "8c285671-db10-4200-88df-18152de48011"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.426035 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-config-data" (OuterVolumeSpecName: "config-data") pod "8c285671-db10-4200-88df-18152de48011" (UID: "8c285671-db10-4200-88df-18152de48011"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.485399 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.485446 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.485460 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c285671-db10-4200-88df-18152de48011-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.485472 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ldn\" (UniqueName: \"kubernetes.io/projected/8c285671-db10-4200-88df-18152de48011-kube-api-access-n6ldn\") on node \"crc\" DevicePath \"\"" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.938567 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" event={"ID":"8c285671-db10-4200-88df-18152de48011","Type":"ContainerDied","Data":"c674bd9ce6e99042883be47c7027625649b436f37366418df546d4047897fe51"} Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.938615 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c674bd9ce6e99042883be47c7027625649b436f37366418df546d4047897fe51" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.938585 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7" Jan 22 16:51:57 crc kubenswrapper[4704]: I0122 16:51:57.941095 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjvtn" event={"ID":"a60eced7-3155-49b2-8989-a4ae5d2cef29","Type":"ContainerStarted","Data":"f3e3f9c9f0a14a059024face4e1007eb473e65490f4c508b667ef5aa753fa92d"} Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.237026 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:58 crc kubenswrapper[4704]: E0122 16:51:58.237522 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c285671-db10-4200-88df-18152de48011" containerName="watcher-kuttl-db-sync" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.237548 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c285671-db10-4200-88df-18152de48011" containerName="watcher-kuttl-db-sync" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.237917 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c285671-db10-4200-88df-18152de48011" containerName="watcher-kuttl-db-sync" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.239577 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.243404 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c49r4" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="registry-server" probeResult="failure" output=< Jan 22 16:51:58 crc kubenswrapper[4704]: timeout: failed to connect service ":50051" within 1s Jan 22 16:51:58 crc kubenswrapper[4704]: > Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.243444 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zt2cq" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.243952 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.244492 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.246248 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.269306 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.289151 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.290122 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.294190 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.339977 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.366643 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.368427 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.370420 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.377995 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.430948 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431032 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431071 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431116 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40c15c7-3441-487f-8527-04c3dc9fdac3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431146 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431180 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5h7k\" (UniqueName: \"kubernetes.io/projected/a40c15c7-3441-487f-8527-04c3dc9fdac3-kube-api-access-g5h7k\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431211 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: 
\"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431244 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r72d8\" (UniqueName: \"kubernetes.io/projected/98a1a943-bfec-424c-b3c5-424afee63f63-kube-api-access-r72d8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431281 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431317 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431342 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98a1a943-bfec-424c-b3c5-424afee63f63-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.431396 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533588 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533678 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533713 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533743 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533780 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533824 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40c15c7-3441-487f-8527-04c3dc9fdac3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533846 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533868 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnpx\" (UniqueName: \"kubernetes.io/projected/7c432492-59d5-4a17-b5ee-698cf6dc32ac-kube-api-access-dqnpx\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.533901 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5h7k\" (UniqueName: \"kubernetes.io/projected/a40c15c7-3441-487f-8527-04c3dc9fdac3-kube-api-access-g5h7k\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.534441 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.534480 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r72d8\" (UniqueName: \"kubernetes.io/projected/98a1a943-bfec-424c-b3c5-424afee63f63-kube-api-access-r72d8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.534517 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.534540 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.534564 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.534582 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/98a1a943-bfec-424c-b3c5-424afee63f63-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.534611 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c432492-59d5-4a17-b5ee-698cf6dc32ac-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.536416 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98a1a943-bfec-424c-b3c5-424afee63f63-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.536434 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40c15c7-3441-487f-8527-04c3dc9fdac3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.539616 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.539664 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.540930 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.541514 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.550834 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.558442 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.560566 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.561585 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5h7k\" (UniqueName: \"kubernetes.io/projected/a40c15c7-3441-487f-8527-04c3dc9fdac3-kube-api-access-g5h7k\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.561660 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r72d8\" (UniqueName: \"kubernetes.io/projected/98a1a943-bfec-424c-b3c5-424afee63f63-kube-api-access-r72d8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.561919 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.581631 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.635584 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.635634 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c432492-59d5-4a17-b5ee-698cf6dc32ac-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.635703 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.635730 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqnpx\" (UniqueName: \"kubernetes.io/projected/7c432492-59d5-4a17-b5ee-698cf6dc32ac-kube-api-access-dqnpx\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.636295 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c432492-59d5-4a17-b5ee-698cf6dc32ac-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 
16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.639611 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.640560 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.653397 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnpx\" (UniqueName: \"kubernetes.io/projected/7c432492-59d5-4a17-b5ee-698cf6dc32ac-kube-api-access-dqnpx\") pod \"watcher-kuttl-applier-0\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.658690 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.691874 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.965436 4704 generic.go:334] "Generic (PLEG): container finished" podID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerID="f3e3f9c9f0a14a059024face4e1007eb473e65490f4c508b667ef5aa753fa92d" exitCode=0 Jan 22 16:51:58 crc kubenswrapper[4704]: I0122 16:51:58.966622 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjvtn" event={"ID":"a60eced7-3155-49b2-8989-a4ae5d2cef29","Type":"ContainerDied","Data":"f3e3f9c9f0a14a059024face4e1007eb473e65490f4c508b667ef5aa753fa92d"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.091855 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:51:59 crc kubenswrapper[4704]: W0122 16:51:59.092130 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda40c15c7_3441_487f_8527_04c3dc9fdac3.slice/crio-ed226bdab822f3fc1f35e936eada73429ff1e83febdf11f627657a880703c18d WatchSource:0}: Error finding container ed226bdab822f3fc1f35e936eada73429ff1e83febdf11f627657a880703c18d: Status 404 returned error can't find the container with id ed226bdab822f3fc1f35e936eada73429ff1e83febdf11f627657a880703c18d Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.263371 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.270836 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:51:59 crc kubenswrapper[4704]: W0122 16:51:59.280445 4704 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c432492_59d5_4a17_b5ee_698cf6dc32ac.slice/crio-5dadf05d35f281200cd7db00317184c446c923eb1e8faa63fa2a0f2c07111d72 WatchSource:0}: Error finding container 5dadf05d35f281200cd7db00317184c446c923eb1e8faa63fa2a0f2c07111d72: Status 404 returned error can't find the container with id 5dadf05d35f281200cd7db00317184c446c923eb1e8faa63fa2a0f2c07111d72 Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.976472 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a40c15c7-3441-487f-8527-04c3dc9fdac3","Type":"ContainerStarted","Data":"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.976905 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a40c15c7-3441-487f-8527-04c3dc9fdac3","Type":"ContainerStarted","Data":"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.976930 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.976944 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a40c15c7-3441-487f-8527-04c3dc9fdac3","Type":"ContainerStarted","Data":"ed226bdab822f3fc1f35e936eada73429ff1e83febdf11f627657a880703c18d"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.978599 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjvtn" event={"ID":"a60eced7-3155-49b2-8989-a4ae5d2cef29","Type":"ContainerStarted","Data":"f44cd45fba6ff18efbe2e7477b328987cf34a35f0d84c0e109037a9fc62ea31f"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.980148 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7c432492-59d5-4a17-b5ee-698cf6dc32ac","Type":"ContainerStarted","Data":"2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.980175 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7c432492-59d5-4a17-b5ee-698cf6dc32ac","Type":"ContainerStarted","Data":"5dadf05d35f281200cd7db00317184c446c923eb1e8faa63fa2a0f2c07111d72"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.981419 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"98a1a943-bfec-424c-b3c5-424afee63f63","Type":"ContainerStarted","Data":"72d8ecab972575ac425308b65ee55f9f77ae9838ea331957c66459d8ba740734"} Jan 22 16:51:59 crc kubenswrapper[4704]: I0122 16:51:59.981450 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"98a1a943-bfec-424c-b3c5-424afee63f63","Type":"ContainerStarted","Data":"55c80d15688c39dde17178cce428e69edbb22f31183d1769d190fa7cf08fbb51"} Jan 22 16:52:00 crc kubenswrapper[4704]: I0122 16:52:00.002899 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.002879532 podStartE2EDuration="2.002879532s" podCreationTimestamp="2026-01-22 16:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:51:59.999002141 +0000 UTC m=+1412.643548831" watchObservedRunningTime="2026-01-22 16:52:00.002879532 +0000 UTC m=+1412.647426232" Jan 22 16:52:00 crc kubenswrapper[4704]: I0122 16:52:00.019053 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wjvtn" podStartSLOduration=2.5227336449999997 
podStartE2EDuration="5.019033587s" podCreationTimestamp="2026-01-22 16:51:55 +0000 UTC" firstStartedPulling="2026-01-22 16:51:56.928510167 +0000 UTC m=+1409.573056887" lastFinishedPulling="2026-01-22 16:51:59.424810119 +0000 UTC m=+1412.069356829" observedRunningTime="2026-01-22 16:52:00.016249357 +0000 UTC m=+1412.660796077" watchObservedRunningTime="2026-01-22 16:52:00.019033587 +0000 UTC m=+1412.663580287"
Jan 22 16:52:00 crc kubenswrapper[4704]: I0122 16:52:00.047809 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.047774184 podStartE2EDuration="2.047774184s" podCreationTimestamp="2026-01-22 16:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:00.039172147 +0000 UTC m=+1412.683718847" watchObservedRunningTime="2026-01-22 16:52:00.047774184 +0000 UTC m=+1412.692320884"
Jan 22 16:52:00 crc kubenswrapper[4704]: I0122 16:52:00.060568 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.060550842 podStartE2EDuration="2.060550842s" podCreationTimestamp="2026-01-22 16:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:00.056430083 +0000 UTC m=+1412.700976783" watchObservedRunningTime="2026-01-22 16:52:00.060550842 +0000 UTC m=+1412.705097542"
Jan 22 16:52:02 crc kubenswrapper[4704]: I0122 16:52:02.301173 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 16:52:03 crc kubenswrapper[4704]: I0122 16:52:03.581882 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 16:52:03 crc kubenswrapper[4704]: I0122 16:52:03.692234 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 16:52:05 crc kubenswrapper[4704]: I0122 16:52:05.990978 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wjvtn"
Jan 22 16:52:05 crc kubenswrapper[4704]: I0122 16:52:05.991031 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wjvtn"
Jan 22 16:52:06 crc kubenswrapper[4704]: I0122 16:52:06.038685 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wjvtn"
Jan 22 16:52:06 crc kubenswrapper[4704]: I0122 16:52:06.092036 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wjvtn"
Jan 22 16:52:07 crc kubenswrapper[4704]: I0122 16:52:07.119334 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:07 crc kubenswrapper[4704]: I0122 16:52:07.262222 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c49r4"
Jan 22 16:52:07 crc kubenswrapper[4704]: I0122 16:52:07.316023 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c49r4"
Jan 22 16:52:08 crc kubenswrapper[4704]: I0122 16:52:08.582812 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 16:52:08 crc kubenswrapper[4704]: I0122 16:52:08.592767 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 16:52:08 crc kubenswrapper[4704]: I0122 16:52:08.659721 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 16:52:08 crc kubenswrapper[4704]: I0122 16:52:08.683452 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 16:52:08 crc kubenswrapper[4704]: I0122 16:52:08.692905 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 16:52:08 crc kubenswrapper[4704]: I0122 16:52:08.729527 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 16:52:09 crc kubenswrapper[4704]: I0122 16:52:09.068040 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 16:52:09 crc kubenswrapper[4704]: I0122 16:52:09.074493 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 16:52:09 crc kubenswrapper[4704]: I0122 16:52:09.099325 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 16:52:09 crc kubenswrapper[4704]: I0122 16:52:09.118838 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 16:52:10 crc kubenswrapper[4704]: I0122 16:52:10.443498 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 16:52:10 crc kubenswrapper[4704]: I0122 16:52:10.444156 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-central-agent" containerID="cri-o://fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88" gracePeriod=30
Jan 22 16:52:10 crc kubenswrapper[4704]: I0122 16:52:10.444215 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="proxy-httpd" containerID="cri-o://340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366" gracePeriod=30
Jan 22 16:52:10 crc kubenswrapper[4704]: I0122 16:52:10.444215 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="sg-core" containerID="cri-o://2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c" gracePeriod=30
Jan 22 16:52:10 crc kubenswrapper[4704]: I0122 16:52:10.444282 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-notification-agent" containerID="cri-o://4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8" gracePeriod=30
Jan 22 16:52:10 crc kubenswrapper[4704]: I0122 16:52:10.655485 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjvtn"]
Jan 22 16:52:10 crc kubenswrapper[4704]: I0122 16:52:10.655743 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wjvtn" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerName="registry-server" containerID="cri-o://f44cd45fba6ff18efbe2e7477b328987cf34a35f0d84c0e109037a9fc62ea31f" gracePeriod=2
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.107161 4704 generic.go:334] "Generic (PLEG): container finished" podID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerID="f44cd45fba6ff18efbe2e7477b328987cf34a35f0d84c0e109037a9fc62ea31f" exitCode=0
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.107224 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjvtn" event={"ID":"a60eced7-3155-49b2-8989-a4ae5d2cef29","Type":"ContainerDied","Data":"f44cd45fba6ff18efbe2e7477b328987cf34a35f0d84c0e109037a9fc62ea31f"}
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.112104 4704 generic.go:334] "Generic (PLEG): container finished" podID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerID="340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366" exitCode=0
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.112142 4704 generic.go:334] "Generic (PLEG): container finished" podID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerID="2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c" exitCode=2
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.112149 4704 generic.go:334] "Generic (PLEG): container finished" podID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerID="fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88" exitCode=0
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.112171 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerDied","Data":"340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366"}
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.112220 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerDied","Data":"2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c"}
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.112234 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerDied","Data":"fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88"}
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.194743 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjvtn"
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.283074 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-utilities\") pod \"a60eced7-3155-49b2-8989-a4ae5d2cef29\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") "
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.283184 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56478\" (UniqueName: \"kubernetes.io/projected/a60eced7-3155-49b2-8989-a4ae5d2cef29-kube-api-access-56478\") pod \"a60eced7-3155-49b2-8989-a4ae5d2cef29\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") "
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.283312 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-catalog-content\") pod \"a60eced7-3155-49b2-8989-a4ae5d2cef29\" (UID: \"a60eced7-3155-49b2-8989-a4ae5d2cef29\") "
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.285506 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-utilities" (OuterVolumeSpecName: "utilities") pod "a60eced7-3155-49b2-8989-a4ae5d2cef29" (UID: "a60eced7-3155-49b2-8989-a4ae5d2cef29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.296846 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60eced7-3155-49b2-8989-a4ae5d2cef29-kube-api-access-56478" (OuterVolumeSpecName: "kube-api-access-56478") pod "a60eced7-3155-49b2-8989-a4ae5d2cef29" (UID: "a60eced7-3155-49b2-8989-a4ae5d2cef29"). InnerVolumeSpecName "kube-api-access-56478". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.315283 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a60eced7-3155-49b2-8989-a4ae5d2cef29" (UID: "a60eced7-3155-49b2-8989-a4ae5d2cef29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.386050 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.386308 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56478\" (UniqueName: \"kubernetes.io/projected/a60eced7-3155-49b2-8989-a4ae5d2cef29-kube-api-access-56478\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:11 crc kubenswrapper[4704]: I0122 16:52:11.386320 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60eced7-3155-49b2-8989-a4ae5d2cef29-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.057122 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c49r4"]
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.057405 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c49r4" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="registry-server" containerID="cri-o://ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0" gracePeriod=2
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.120990 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjvtn" event={"ID":"a60eced7-3155-49b2-8989-a4ae5d2cef29","Type":"ContainerDied","Data":"611dc9dac038ce24b2214b84d825837ca38a3801b614151c55ac9ce6517e55f6"}
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.121038 4704 scope.go:117] "RemoveContainer" containerID="f44cd45fba6ff18efbe2e7477b328987cf34a35f0d84c0e109037a9fc62ea31f"
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.121160 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjvtn"
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.170208 4704 scope.go:117] "RemoveContainer" containerID="f3e3f9c9f0a14a059024face4e1007eb473e65490f4c508b667ef5aa753fa92d"
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.190940 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjvtn"]
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.198155 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjvtn"]
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.217097 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.217309 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-kuttl-api-log" containerID="cri-o://73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9" gracePeriod=30
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.217646 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-api" containerID="cri-o://9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1" gracePeriod=30
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.266999 4704 scope.go:117] "RemoveContainer" containerID="72fab189c6315bf57f11e4ea0b185fb3d1ac6411d1d1e651ec24d53491d8854d"
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.488544 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c49r4"
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.605084 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-utilities\") pod \"cb9e1085-03ce-419b-be48-cee95435cc94\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") "
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.605416 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpkq4\" (UniqueName: \"kubernetes.io/projected/cb9e1085-03ce-419b-be48-cee95435cc94-kube-api-access-qpkq4\") pod \"cb9e1085-03ce-419b-be48-cee95435cc94\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") "
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.605480 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-catalog-content\") pod \"cb9e1085-03ce-419b-be48-cee95435cc94\" (UID: \"cb9e1085-03ce-419b-be48-cee95435cc94\") "
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.605729 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-utilities" (OuterVolumeSpecName: "utilities") pod "cb9e1085-03ce-419b-be48-cee95435cc94" (UID: "cb9e1085-03ce-419b-be48-cee95435cc94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.606090 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.611054 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9e1085-03ce-419b-be48-cee95435cc94-kube-api-access-qpkq4" (OuterVolumeSpecName: "kube-api-access-qpkq4") pod "cb9e1085-03ce-419b-be48-cee95435cc94" (UID: "cb9e1085-03ce-419b-be48-cee95435cc94"). InnerVolumeSpecName "kube-api-access-qpkq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.707619 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpkq4\" (UniqueName: \"kubernetes.io/projected/cb9e1085-03ce-419b-be48-cee95435cc94-kube-api-access-qpkq4\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.712985 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb9e1085-03ce-419b-be48-cee95435cc94" (UID: "cb9e1085-03ce-419b-be48-cee95435cc94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:52:12 crc kubenswrapper[4704]: I0122 16:52:12.809243 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb9e1085-03ce-419b-be48-cee95435cc94-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.021751 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.114647 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-config-data\") pod \"a40c15c7-3441-487f-8527-04c3dc9fdac3\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") "
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115020 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-custom-prometheus-ca\") pod \"a40c15c7-3441-487f-8527-04c3dc9fdac3\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") "
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115043 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40c15c7-3441-487f-8527-04c3dc9fdac3-logs\") pod \"a40c15c7-3441-487f-8527-04c3dc9fdac3\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") "
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115082 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5h7k\" (UniqueName: \"kubernetes.io/projected/a40c15c7-3441-487f-8527-04c3dc9fdac3-kube-api-access-g5h7k\") pod \"a40c15c7-3441-487f-8527-04c3dc9fdac3\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") "
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115160 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-public-tls-certs\") pod \"a40c15c7-3441-487f-8527-04c3dc9fdac3\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") "
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115505 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-combined-ca-bundle\") pod \"a40c15c7-3441-487f-8527-04c3dc9fdac3\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") "
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115503 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40c15c7-3441-487f-8527-04c3dc9fdac3-logs" (OuterVolumeSpecName: "logs") pod "a40c15c7-3441-487f-8527-04c3dc9fdac3" (UID: "a40c15c7-3441-487f-8527-04c3dc9fdac3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115544 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-internal-tls-certs\") pod \"a40c15c7-3441-487f-8527-04c3dc9fdac3\" (UID: \"a40c15c7-3441-487f-8527-04c3dc9fdac3\") "
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.115851 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40c15c7-3441-487f-8527-04c3dc9fdac3-logs\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.119401 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40c15c7-3441-487f-8527-04c3dc9fdac3-kube-api-access-g5h7k" (OuterVolumeSpecName: "kube-api-access-g5h7k") pod "a40c15c7-3441-487f-8527-04c3dc9fdac3" (UID: "a40c15c7-3441-487f-8527-04c3dc9fdac3"). InnerVolumeSpecName "kube-api-access-g5h7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.133200 4704 generic.go:334] "Generic (PLEG): container finished" podID="cb9e1085-03ce-419b-be48-cee95435cc94" containerID="ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0" exitCode=0
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.133254 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c49r4" event={"ID":"cb9e1085-03ce-419b-be48-cee95435cc94","Type":"ContainerDied","Data":"ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0"}
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.133282 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c49r4" event={"ID":"cb9e1085-03ce-419b-be48-cee95435cc94","Type":"ContainerDied","Data":"43088049ad8c8ad1ec69eb877c978c027dff99644390c823983d06792ff71dcf"}
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.133299 4704 scope.go:117] "RemoveContainer" containerID="ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.133461 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c49r4"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.136811 4704 generic.go:334] "Generic (PLEG): container finished" podID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerID="9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1" exitCode=0
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.136918 4704 generic.go:334] "Generic (PLEG): container finished" podID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerID="73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9" exitCode=143
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.136941 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.136957 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a40c15c7-3441-487f-8527-04c3dc9fdac3","Type":"ContainerDied","Data":"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"}
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.137536 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a40c15c7-3441-487f-8527-04c3dc9fdac3","Type":"ContainerDied","Data":"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"}
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.137609 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"a40c15c7-3441-487f-8527-04c3dc9fdac3","Type":"ContainerDied","Data":"ed226bdab822f3fc1f35e936eada73429ff1e83febdf11f627657a880703c18d"}
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.142170 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a40c15c7-3441-487f-8527-04c3dc9fdac3" (UID: "a40c15c7-3441-487f-8527-04c3dc9fdac3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.150943 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a40c15c7-3441-487f-8527-04c3dc9fdac3" (UID: "a40c15c7-3441-487f-8527-04c3dc9fdac3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.159236 4704 scope.go:117] "RemoveContainer" containerID="8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.170440 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a40c15c7-3441-487f-8527-04c3dc9fdac3" (UID: "a40c15c7-3441-487f-8527-04c3dc9fdac3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.178265 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c49r4"]
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.187909 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c49r4"]
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.194460 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a40c15c7-3441-487f-8527-04c3dc9fdac3" (UID: "a40c15c7-3441-487f-8527-04c3dc9fdac3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.197103 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-config-data" (OuterVolumeSpecName: "config-data") pod "a40c15c7-3441-487f-8527-04c3dc9fdac3" (UID: "a40c15c7-3441-487f-8527-04c3dc9fdac3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.208213 4704 scope.go:117] "RemoveContainer" containerID="dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.217812 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.217842 4704 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.217852 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.217860 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.217870 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5h7k\" (UniqueName: \"kubernetes.io/projected/a40c15c7-3441-487f-8527-04c3dc9fdac3-kube-api-access-g5h7k\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.217877 4704 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40c15c7-3441-487f-8527-04c3dc9fdac3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.226200 4704 scope.go:117] "RemoveContainer" containerID="ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0"
Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.226643 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0\": container with ID starting with ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0 not found: ID does not exist" containerID="ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.226681 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0"} err="failed to get container status \"ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0\": rpc error: code = NotFound desc = could not find container \"ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0\": container with ID starting with ecec0f03c7aec0bf4dc7dde18fb4014950d7ed20ae5f61327cefa6f33f6e36c0 not found: ID does not exist"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.226707 4704 scope.go:117] "RemoveContainer" containerID="8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599"
Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.227139 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599\": container with ID starting with 8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599 not found: ID does not exist" containerID="8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.227213 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599"} err="failed to get container status \"8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599\": rpc error: code = NotFound desc = could not find container \"8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599\": container with ID starting with 8742e63a160828135c4cdf4522a99e6804d557fea473e3e00c9b6b47b01e7599 not found: ID does not exist"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.227282 4704 scope.go:117] "RemoveContainer" containerID="dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7"
Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.227596 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7\": container with ID starting with dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7 not found: ID does not exist" containerID="dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.227650 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7"} err="failed to get container status \"dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7\": rpc error: code = NotFound desc = could not find container \"dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7\": container with ID starting with dcb82515db5ebf7dd39995c115f91c35d5071edd27d947f0489832f32996eae7 not found: ID does not exist"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.227671 4704 scope.go:117] "RemoveContainer" containerID="9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.244357 4704 scope.go:117] "RemoveContainer" containerID="73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.264509 4704 scope.go:117] "RemoveContainer" containerID="9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"
Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.265126 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1\": container with ID starting with 9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1 not found: ID does not exist" containerID="9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.265157 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"} err="failed to get container status \"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1\": rpc error: code = NotFound desc = could not find container \"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1\": container with ID starting with 9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1 not found: ID does not exist"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.265181 4704 scope.go:117] "RemoveContainer" containerID="73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"
Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.265452 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9\": container with ID starting with 73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9 not found: ID does not exist" containerID="73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.265491 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"} err="failed to get container status \"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9\": rpc error: code = NotFound desc = could not find container \"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9\": container with ID starting with 73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9 not found: ID does not exist"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.265518 4704 scope.go:117] "RemoveContainer" containerID="9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.265836 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1"} err="failed to get container status \"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1\": rpc error: code = NotFound desc = could not find container \"9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1\": container with ID starting with 9d3d1362f1a0f5873856dde58df0d61753d155eaeafa855f81639251644a01b1 not found: ID does not exist"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.265881 4704 scope.go:117] "RemoveContainer" containerID="73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"
Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.266243 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9"} err="failed to get container status \"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9\": rpc error: code = NotFound desc = could not find container \"73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9\": container with ID starting with 73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9 not found: ID does not exist" container
with ID starting with 73ed4f39448041f9609a21d0287b20753188abd552dfd2a5c7d8a74f745050e9 not found: ID does not exist" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.492859 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.506046 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.526668 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.527541 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerName="extract-utilities" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.527683 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerName="extract-utilities" Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.527748 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-api" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.527930 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-api" Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.528004 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerName="extract-content" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528055 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerName="extract-content" Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.528107 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" 
containerName="registry-server" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528157 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" containerName="registry-server" Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.528206 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-kuttl-api-log" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528260 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-kuttl-api-log" Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.528328 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="registry-server" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528381 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="registry-server" Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.528432 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="extract-content" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528479 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="extract-content" Jan 22 16:52:13 crc kubenswrapper[4704]: E0122 16:52:13.528527 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="extract-utilities" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528573 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="extract-utilities" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528879 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" 
containerName="registry-server" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.528967 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-api" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.529021 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" containerName="watcher-kuttl-api-log" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.529082 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" containerName="registry-server" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.530269 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.532606 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.533521 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.537816 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.544565 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.623688 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 
16:52:13.623740 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.623782 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.623835 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f21df6-be2e-4b70-99f7-0dca3af15451-logs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.624048 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.624125 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 
16:52:13.624290 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7877m\" (UniqueName: \"kubernetes.io/projected/07f21df6-be2e-4b70-99f7-0dca3af15451-kube-api-access-7877m\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.643777 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40c15c7-3441-487f-8527-04c3dc9fdac3" path="/var/lib/kubelet/pods/a40c15c7-3441-487f-8527-04c3dc9fdac3/volumes" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.644672 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60eced7-3155-49b2-8989-a4ae5d2cef29" path="/var/lib/kubelet/pods/a60eced7-3155-49b2-8989-a4ae5d2cef29/volumes" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.645741 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9e1085-03ce-419b-be48-cee95435cc94" path="/var/lib/kubelet/pods/cb9e1085-03ce-419b-be48-cee95435cc94/volumes" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.726233 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.726640 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.726864 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7877m\" (UniqueName: \"kubernetes.io/projected/07f21df6-be2e-4b70-99f7-0dca3af15451-kube-api-access-7877m\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.727033 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.727184 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.727313 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.727489 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f21df6-be2e-4b70-99f7-0dca3af15451-logs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.727952 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/07f21df6-be2e-4b70-99f7-0dca3af15451-logs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.730466 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.730587 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.730892 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.733521 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.734044 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.749574 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7877m\" (UniqueName: \"kubernetes.io/projected/07f21df6-be2e-4b70-99f7-0dca3af15451-kube-api-access-7877m\") pod \"watcher-kuttl-api-0\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:13 crc kubenswrapper[4704]: I0122 16:52:13.853200 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:14 crc kubenswrapper[4704]: I0122 16:52:14.310148 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:14 crc kubenswrapper[4704]: W0122 16:52:14.316874 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f21df6_be2e_4b70_99f7_0dca3af15451.slice/crio-cb1ae381b47e5f7dd0596b513cac7841fbee9ea4b6898d324224a24ad9a87d87 WatchSource:0}: Error finding container cb1ae381b47e5f7dd0596b513cac7841fbee9ea4b6898d324224a24ad9a87d87: Status 404 returned error can't find the container with id cb1ae381b47e5f7dd0596b513cac7841fbee9ea4b6898d324224a24ad9a87d87 Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.158257 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"07f21df6-be2e-4b70-99f7-0dca3af15451","Type":"ContainerStarted","Data":"f2f57dab3d5022f36d3a1fe98ffb265c0a42cd76c1ec19093546deffb12841e2"} Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.158837 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"07f21df6-be2e-4b70-99f7-0dca3af15451","Type":"ContainerStarted","Data":"c619f19b0ce81a78a880949e8be9c915874956fe4533c0b1b68e75ef194e1ed5"} Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.158851 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"07f21df6-be2e-4b70-99f7-0dca3af15451","Type":"ContainerStarted","Data":"cb1ae381b47e5f7dd0596b513cac7841fbee9ea4b6898d324224a24ad9a87d87"} Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.159204 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.183180 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.183162879 podStartE2EDuration="2.183162879s" podCreationTimestamp="2026-01-22 16:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:15.178837206 +0000 UTC m=+1427.823383906" watchObservedRunningTime="2026-01-22 16:52:15.183162879 +0000 UTC m=+1427.827709579" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.709763 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.823890 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7"] Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.835955 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-nz2l7"] Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.871936 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-sg-core-conf-yaml\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.871981 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqh8\" (UniqueName: \"kubernetes.io/projected/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-kube-api-access-mnqh8\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.872028 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-log-httpd\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.872052 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-combined-ca-bundle\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.872080 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-ceilometer-tls-certs\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.872160 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-config-data\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.872180 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-scripts\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.872237 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-run-httpd\") pod \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\" (UID: \"fc8d7d10-d30b-4622-a446-99a2f2de9ddb\") " Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.873177 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.873535 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.876078 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.876273 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="98a1a943-bfec-424c-b3c5-424afee63f63" containerName="watcher-decision-engine" containerID="cri-o://72d8ecab972575ac425308b65ee55f9f77ae9838ea331957c66459d8ba740734" gracePeriod=30 Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.881496 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-kube-api-access-mnqh8" (OuterVolumeSpecName: "kube-api-access-mnqh8") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "kube-api-access-mnqh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.886927 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-scripts" (OuterVolumeSpecName: "scripts") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.907654 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.941806 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.974258 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.974399 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqh8\" (UniqueName: \"kubernetes.io/projected/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-kube-api-access-mnqh8\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.974477 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.974538 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.974592 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.974679 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.982198 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher0468-account-delete-z5rw6"] Jan 22 16:52:15 crc kubenswrapper[4704]: E0122 16:52:15.982757 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-central-agent" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.982834 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-central-agent" Jan 22 16:52:15 crc kubenswrapper[4704]: E0122 16:52:15.982923 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-notification-agent" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.982974 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-notification-agent" Jan 22 16:52:15 crc kubenswrapper[4704]: E0122 16:52:15.983063 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="sg-core" Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.983118 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" 
containerName="sg-core"
Jan 22 16:52:15 crc kubenswrapper[4704]: E0122 16:52:15.983187 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="proxy-httpd"
Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.983240 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="proxy-httpd"
Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.983433 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-central-agent"
Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.983509 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="ceilometer-notification-agent"
Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.983579 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="proxy-httpd"
Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.983634 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerName="sg-core"
Jan 22 16:52:15 crc kubenswrapper[4704]: I0122 16:52:15.984237 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.003502 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.009894 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher0468-account-delete-z5rw6"]
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.033845 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.034182 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-config-data" (OuterVolumeSpecName: "config-data") pod "fc8d7d10-d30b-4622-a446-99a2f2de9ddb" (UID: "fc8d7d10-d30b-4622-a446-99a2f2de9ddb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.045831 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.046039 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="7c432492-59d5-4a17-b5ee-698cf6dc32ac" containerName="watcher-applier" containerID="cri-o://2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" gracePeriod=30
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.076621 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06378b9b-335c-44d7-9cb1-a53392766bf5-operator-scripts\") pod \"watcher0468-account-delete-z5rw6\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") " pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.076850 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48rrx\" (UniqueName: \"kubernetes.io/projected/06378b9b-335c-44d7-9cb1-a53392766bf5-kube-api-access-48rrx\") pod \"watcher0468-account-delete-z5rw6\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") " pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.076932 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.076949 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8d7d10-d30b-4622-a446-99a2f2de9ddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.172086 4704 generic.go:334] "Generic (PLEG): container finished" podID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" containerID="4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8" exitCode=0
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.172205 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.172250 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerDied","Data":"4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8"}
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.172276 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fc8d7d10-d30b-4622-a446-99a2f2de9ddb","Type":"ContainerDied","Data":"8d0cddb92a0c66e0d5c3ce2801200db325a53c60b2baa73973a3545778da88a2"}
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.172294 4704 scope.go:117] "RemoveContainer" containerID="340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.172611 4704 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-api-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-zt2cq\" not found"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.178750 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48rrx\" (UniqueName: \"kubernetes.io/projected/06378b9b-335c-44d7-9cb1-a53392766bf5-kube-api-access-48rrx\") pod \"watcher0468-account-delete-z5rw6\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") " pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.178838 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06378b9b-335c-44d7-9cb1-a53392766bf5-operator-scripts\") pod \"watcher0468-account-delete-z5rw6\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") " pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.179701 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06378b9b-335c-44d7-9cb1-a53392766bf5-operator-scripts\") pod \"watcher0468-account-delete-z5rw6\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") " pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.207016 4704 scope.go:117] "RemoveContainer" containerID="2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.207751 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48rrx\" (UniqueName: \"kubernetes.io/projected/06378b9b-335c-44d7-9cb1-a53392766bf5-kube-api-access-48rrx\") pod \"watcher0468-account-delete-z5rw6\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") " pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.224435 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.226088 4704 scope.go:117] "RemoveContainer" containerID="4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.253760 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.254467 4704 scope.go:117] "RemoveContainer" containerID="fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.262870 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.264885 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.268081 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.272148 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.273004 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.273516 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.280420 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.280485 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data podName:07f21df6-be2e-4b70-99f7-0dca3af15451 nodeName:}" failed. No retries permitted until 2026-01-22 16:52:16.780469696 +0000 UTC m=+1429.425016396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data") pod "watcher-kuttl-api-0" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451") : secret "watcher-kuttl-api-config-data" not found
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.286210 4704 scope.go:117] "RemoveContainer" containerID="340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366"
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.286760 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366\": container with ID starting with 340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366 not found: ID does not exist" containerID="340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.286818 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366"} err="failed to get container status \"340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366\": rpc error: code = NotFound desc = could not find container \"340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366\": container with ID starting with 340e53511e225cb6936dd9c08be92a9cb973ab1f906693697972b15dbf4eb366 not found: ID does not exist"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.286845 4704 scope.go:117] "RemoveContainer" containerID="2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c"
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.301224 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c\": container with ID starting with 2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c not found: ID does not exist" containerID="2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.301283 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c"} err="failed to get container status \"2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c\": rpc error: code = NotFound desc = could not find container \"2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c\": container with ID starting with 2cfc287573c78d1b830130c273e1129d7ef7a042c75e250dfe642532553ee25c not found: ID does not exist"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.301313 4704 scope.go:117] "RemoveContainer" containerID="4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8"
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.304419 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8\": container with ID starting with 4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8 not found: ID does not exist" containerID="4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.304463 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8"} err="failed to get container status \"4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8\": rpc error: code = NotFound desc = could not find container \"4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8\": container with ID starting with 4d61e554cbd7186f081bb0836dae50730e7b2104fa151a53b7c82014b34640c8 not found: ID does not exist"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.304496 4704 scope.go:117] "RemoveContainer" containerID="fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.306555 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.314239 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88\": container with ID starting with fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88 not found: ID does not exist" containerID="fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.314409 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88"} err="failed to get container status \"fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88\": rpc error: code = NotFound desc = could not find container \"fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88\": container with ID starting with fb47a151d66fe801345e5952028ebef8d2337a6bbee8b4749e9036051e624d88 not found: ID does not exist"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382716 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-log-httpd\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382813 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vr8\" (UniqueName: \"kubernetes.io/projected/e9722f27-89ae-485b-ace7-3af2257bd5c5-kube-api-access-m5vr8\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382839 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-scripts\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382883 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-run-httpd\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382912 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-config-data\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382937 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382957 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.382981 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484157 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-log-httpd\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484514 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vr8\" (UniqueName: \"kubernetes.io/projected/e9722f27-89ae-485b-ace7-3af2257bd5c5-kube-api-access-m5vr8\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484542 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-scripts\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484575 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-run-httpd\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484610 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-config-data\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484642 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484664 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484690 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.484688 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-log-httpd\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.485296 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-run-httpd\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.489938 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-config-data\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.491089 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-scripts\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.492067 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.494843 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.500578 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.507616 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vr8\" (UniqueName: \"kubernetes.io/projected/e9722f27-89ae-485b-ace7-3af2257bd5c5-kube-api-access-m5vr8\") pod \"ceilometer-0\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.586745 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 16:52:16 crc kubenswrapper[4704]: I0122 16:52:16.757306 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher0468-account-delete-z5rw6"]
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.790837 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found
Jan 22 16:52:16 crc kubenswrapper[4704]: E0122 16:52:16.790922 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data podName:07f21df6-be2e-4b70-99f7-0dca3af15451 nodeName:}" failed. No retries permitted until 2026-01-22 16:52:17.790902962 +0000 UTC m=+1430.435449662 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data") pod "watcher-kuttl-api-0" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451") : secret "watcher-kuttl-api-config-data" not found
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.169009 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.190783 4704 generic.go:334] "Generic (PLEG): container finished" podID="98a1a943-bfec-424c-b3c5-424afee63f63" containerID="72d8ecab972575ac425308b65ee55f9f77ae9838ea331957c66459d8ba740734" exitCode=0
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.190878 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"98a1a943-bfec-424c-b3c5-424afee63f63","Type":"ContainerDied","Data":"72d8ecab972575ac425308b65ee55f9f77ae9838ea331957c66459d8ba740734"}
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.190909 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"98a1a943-bfec-424c-b3c5-424afee63f63","Type":"ContainerDied","Data":"55c80d15688c39dde17178cce428e69edbb22f31183d1769d190fa7cf08fbb51"}
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.190937 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c80d15688c39dde17178cce428e69edbb22f31183d1769d190fa7cf08fbb51"
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.193913 4704 generic.go:334] "Generic (PLEG): container finished" podID="06378b9b-335c-44d7-9cb1-a53392766bf5" containerID="dfcd54f30faf33003ace5ad9f74039a0048395e7e2bc9eb554607164ff715205" exitCode=0
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.193996 4704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.194138 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-kuttl-api-log" containerID="cri-o://c619f19b0ce81a78a880949e8be9c915874956fe4533c0b1b68e75ef194e1ed5" gracePeriod=30
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.194428 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6" event={"ID":"06378b9b-335c-44d7-9cb1-a53392766bf5","Type":"ContainerDied","Data":"dfcd54f30faf33003ace5ad9f74039a0048395e7e2bc9eb554607164ff715205"}
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.194461 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6" event={"ID":"06378b9b-335c-44d7-9cb1-a53392766bf5","Type":"ContainerStarted","Data":"e8ceda18b48a80b1281935833e23920f7da11d2ad01d9f21c1930e2d5f84d011"}
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.194723 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-api" containerID="cri-o://f2f57dab3d5022f36d3a1fe98ffb265c0a42cd76c1ec19093546deffb12841e2" gracePeriod=30
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.200683 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": EOF"
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.243640 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.400513 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-custom-prometheus-ca\") pod \"98a1a943-bfec-424c-b3c5-424afee63f63\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") "
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.400595 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-combined-ca-bundle\") pod \"98a1a943-bfec-424c-b3c5-424afee63f63\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") "
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.400649 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98a1a943-bfec-424c-b3c5-424afee63f63-logs\") pod \"98a1a943-bfec-424c-b3c5-424afee63f63\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") "
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.400948 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r72d8\" (UniqueName: \"kubernetes.io/projected/98a1a943-bfec-424c-b3c5-424afee63f63-kube-api-access-r72d8\") pod \"98a1a943-bfec-424c-b3c5-424afee63f63\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") "
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.401024 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-config-data\") pod \"98a1a943-bfec-424c-b3c5-424afee63f63\" (UID: \"98a1a943-bfec-424c-b3c5-424afee63f63\") "
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.401160 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98a1a943-bfec-424c-b3c5-424afee63f63-logs" (OuterVolumeSpecName: "logs") pod "98a1a943-bfec-424c-b3c5-424afee63f63" (UID: "98a1a943-bfec-424c-b3c5-424afee63f63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.401815 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98a1a943-bfec-424c-b3c5-424afee63f63-logs\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.406469 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a1a943-bfec-424c-b3c5-424afee63f63-kube-api-access-r72d8" (OuterVolumeSpecName: "kube-api-access-r72d8") pod "98a1a943-bfec-424c-b3c5-424afee63f63" (UID: "98a1a943-bfec-424c-b3c5-424afee63f63"). InnerVolumeSpecName "kube-api-access-r72d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.422729 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "98a1a943-bfec-424c-b3c5-424afee63f63" (UID: "98a1a943-bfec-424c-b3c5-424afee63f63"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.424176 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98a1a943-bfec-424c-b3c5-424afee63f63" (UID: "98a1a943-bfec-424c-b3c5-424afee63f63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.445138 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-config-data" (OuterVolumeSpecName: "config-data") pod "98a1a943-bfec-424c-b3c5-424afee63f63" (UID: "98a1a943-bfec-424c-b3c5-424afee63f63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.503158 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r72d8\" (UniqueName: \"kubernetes.io/projected/98a1a943-bfec-424c-b3c5-424afee63f63-kube-api-access-r72d8\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.503189 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.503198 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.503207 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a1a943-bfec-424c-b3c5-424afee63f63-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.644872 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c285671-db10-4200-88df-18152de48011" path="/var/lib/kubelet/pods/8c285671-db10-4200-88df-18152de48011/volumes"
Jan 22 16:52:17 crc kubenswrapper[4704]: I0122 16:52:17.645784 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8d7d10-d30b-4622-a446-99a2f2de9ddb" path="/var/lib/kubelet/pods/fc8d7d10-d30b-4622-a446-99a2f2de9ddb/volumes"
Jan 22 16:52:17 crc kubenswrapper[4704]: E0122 16:52:17.810729 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found
Jan 22 16:52:17 crc kubenswrapper[4704]: E0122 16:52:17.811158 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data podName:07f21df6-be2e-4b70-99f7-0dca3af15451 nodeName:}" failed. No retries permitted until 2026-01-22 16:52:19.811134275 +0000 UTC m=+1432.455681045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data") pod "watcher-kuttl-api-0" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451") : secret "watcher-kuttl-api-config-data" not found
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.199860 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.204432 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerStarted","Data":"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b"}
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.204479 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerStarted","Data":"747a49c6e524cb1676082281bad749bc07faedde37928ee3db528e37e50b52a8"}
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.208080 4704 generic.go:334] "Generic (PLEG): container finished" podID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerID="c619f19b0ce81a78a880949e8be9c915874956fe4533c0b1b68e75ef194e1ed5" exitCode=143
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.208166 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.208169 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"07f21df6-be2e-4b70-99f7-0dca3af15451","Type":"ContainerDied","Data":"c619f19b0ce81a78a880949e8be9c915874956fe4533c0b1b68e75ef194e1ed5"}
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.240724 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.254380 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.693380 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6"
Jan 22 16:52:18 crc kubenswrapper[4704]: E0122 16:52:18.694915 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 22 16:52:18 crc kubenswrapper[4704]: E0122 16:52:18.699158 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 22 16:52:18 crc kubenswrapper[4704]: E0122 16:52:18.700224 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 22 16:52:18 crc kubenswrapper[4704]: E0122 16:52:18.700259 4704 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="7c432492-59d5-4a17-b5ee-698cf6dc32ac" containerName="watcher-applier"
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.760768 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48rrx\" (UniqueName: \"kubernetes.io/projected/06378b9b-335c-44d7-9cb1-a53392766bf5-kube-api-access-48rrx\") pod \"06378b9b-335c-44d7-9cb1-a53392766bf5\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") "
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.760936 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06378b9b-335c-44d7-9cb1-a53392766bf5-operator-scripts\") pod \"06378b9b-335c-44d7-9cb1-a53392766bf5\" (UID: \"06378b9b-335c-44d7-9cb1-a53392766bf5\") "
Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.761897 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06378b9b-335c-44d7-9cb1-a53392766bf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06378b9b-335c-44d7-9cb1-a53392766bf5" (UID: "06378b9b-335c-44d7-9cb1-a53392766bf5"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.773611 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06378b9b-335c-44d7-9cb1-a53392766bf5-kube-api-access-48rrx" (OuterVolumeSpecName: "kube-api-access-48rrx") pod "06378b9b-335c-44d7-9cb1-a53392766bf5" (UID: "06378b9b-335c-44d7-9cb1-a53392766bf5"). InnerVolumeSpecName "kube-api-access-48rrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.855126 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.862711 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48rrx\" (UniqueName: \"kubernetes.io/projected/06378b9b-335c-44d7-9cb1-a53392766bf5-kube-api-access-48rrx\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.862751 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06378b9b-335c-44d7-9cb1-a53392766bf5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.970293 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": read tcp 10.217.0.2:50854->10.217.0.144:9322: read: connection reset by peer" Jan 22 16:52:18 crc kubenswrapper[4704]: I0122 16:52:18.970808 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": dial tcp 10.217.0.144:9322: connect: connection 
refused" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.135931 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.135980 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.236213 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerStarted","Data":"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4"} Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.252141 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.252169 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0468-account-delete-z5rw6" event={"ID":"06378b9b-335c-44d7-9cb1-a53392766bf5","Type":"ContainerDied","Data":"e8ceda18b48a80b1281935833e23920f7da11d2ad01d9f21c1930e2d5f84d011"} Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.252230 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ceda18b48a80b1281935833e23920f7da11d2ad01d9f21c1930e2d5f84d011" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.264086 4704 generic.go:334] "Generic (PLEG): container finished" podID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerID="f2f57dab3d5022f36d3a1fe98ffb265c0a42cd76c1ec19093546deffb12841e2" exitCode=0 Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.264127 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"07f21df6-be2e-4b70-99f7-0dca3af15451","Type":"ContainerDied","Data":"f2f57dab3d5022f36d3a1fe98ffb265c0a42cd76c1ec19093546deffb12841e2"} Jan 22 16:52:19 crc kubenswrapper[4704]: E0122 16:52:19.391078 4704 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06378b9b_335c_44d7_9cb1_a53392766bf5.slice/crio-e8ceda18b48a80b1281935833e23920f7da11d2ad01d9f21c1930e2d5f84d011\": RecentStats: unable to find data in memory cache]" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.481100 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.578952 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7877m\" (UniqueName: \"kubernetes.io/projected/07f21df6-be2e-4b70-99f7-0dca3af15451-kube-api-access-7877m\") pod \"07f21df6-be2e-4b70-99f7-0dca3af15451\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.579000 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-internal-tls-certs\") pod \"07f21df6-be2e-4b70-99f7-0dca3af15451\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.579020 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-custom-prometheus-ca\") pod \"07f21df6-be2e-4b70-99f7-0dca3af15451\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.579065 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data\") pod \"07f21df6-be2e-4b70-99f7-0dca3af15451\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.579115 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-combined-ca-bundle\") pod \"07f21df6-be2e-4b70-99f7-0dca3af15451\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.579167 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f21df6-be2e-4b70-99f7-0dca3af15451-logs\") pod \"07f21df6-be2e-4b70-99f7-0dca3af15451\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.579241 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-public-tls-certs\") pod \"07f21df6-be2e-4b70-99f7-0dca3af15451\" (UID: \"07f21df6-be2e-4b70-99f7-0dca3af15451\") " Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.579530 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f21df6-be2e-4b70-99f7-0dca3af15451-logs" (OuterVolumeSpecName: "logs") pod "07f21df6-be2e-4b70-99f7-0dca3af15451" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.585919 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f21df6-be2e-4b70-99f7-0dca3af15451-kube-api-access-7877m" (OuterVolumeSpecName: "kube-api-access-7877m") pod "07f21df6-be2e-4b70-99f7-0dca3af15451" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451"). InnerVolumeSpecName "kube-api-access-7877m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.610107 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "07f21df6-be2e-4b70-99f7-0dca3af15451" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.616883 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07f21df6-be2e-4b70-99f7-0dca3af15451" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.630274 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07f21df6-be2e-4b70-99f7-0dca3af15451" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.630942 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data" (OuterVolumeSpecName: "config-data") pod "07f21df6-be2e-4b70-99f7-0dca3af15451" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.633836 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07f21df6-be2e-4b70-99f7-0dca3af15451" (UID: "07f21df6-be2e-4b70-99f7-0dca3af15451"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.649815 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a1a943-bfec-424c-b3c5-424afee63f63" path="/var/lib/kubelet/pods/98a1a943-bfec-424c-b3c5-424afee63f63/volumes" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.681087 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.681111 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f21df6-be2e-4b70-99f7-0dca3af15451-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.681121 4704 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.681129 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7877m\" (UniqueName: \"kubernetes.io/projected/07f21df6-be2e-4b70-99f7-0dca3af15451-kube-api-access-7877m\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.681137 4704 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.681159 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:19 crc kubenswrapper[4704]: I0122 16:52:19.681168 4704 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f21df6-be2e-4b70-99f7-0dca3af15451-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.274198 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"07f21df6-be2e-4b70-99f7-0dca3af15451","Type":"ContainerDied","Data":"cb1ae381b47e5f7dd0596b513cac7841fbee9ea4b6898d324224a24ad9a87d87"} Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.274254 4704 scope.go:117] "RemoveContainer" containerID="f2f57dab3d5022f36d3a1fe98ffb265c0a42cd76c1ec19093546deffb12841e2" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.274379 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.278056 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerStarted","Data":"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d"} Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.295762 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.300474 4704 scope.go:117] "RemoveContainer" containerID="c619f19b0ce81a78a880949e8be9c915874956fe4533c0b1b68e75ef194e1ed5" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.307507 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.849984 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.903916 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c432492-59d5-4a17-b5ee-698cf6dc32ac-logs\") pod \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.904338 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqnpx\" (UniqueName: \"kubernetes.io/projected/7c432492-59d5-4a17-b5ee-698cf6dc32ac-kube-api-access-dqnpx\") pod \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.904385 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-config-data\") pod \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.904408 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-combined-ca-bundle\") pod \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\" (UID: \"7c432492-59d5-4a17-b5ee-698cf6dc32ac\") " Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.904461 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c432492-59d5-4a17-b5ee-698cf6dc32ac-logs" (OuterVolumeSpecName: "logs") pod "7c432492-59d5-4a17-b5ee-698cf6dc32ac" (UID: "7c432492-59d5-4a17-b5ee-698cf6dc32ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.904762 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c432492-59d5-4a17-b5ee-698cf6dc32ac-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.912457 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c432492-59d5-4a17-b5ee-698cf6dc32ac-kube-api-access-dqnpx" (OuterVolumeSpecName: "kube-api-access-dqnpx") pod "7c432492-59d5-4a17-b5ee-698cf6dc32ac" (UID: "7c432492-59d5-4a17-b5ee-698cf6dc32ac"). InnerVolumeSpecName "kube-api-access-dqnpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.973365 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c432492-59d5-4a17-b5ee-698cf6dc32ac" (UID: "7c432492-59d5-4a17-b5ee-698cf6dc32ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.980116 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jl977"] Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.993612 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-config-data" (OuterVolumeSpecName: "config-data") pod "7c432492-59d5-4a17-b5ee-698cf6dc32ac" (UID: "7c432492-59d5-4a17-b5ee-698cf6dc32ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:20 crc kubenswrapper[4704]: I0122 16:52:20.997694 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jl977"] Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.006223 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqnpx\" (UniqueName: \"kubernetes.io/projected/7c432492-59d5-4a17-b5ee-698cf6dc32ac-kube-api-access-dqnpx\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.006253 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.006266 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c432492-59d5-4a17-b5ee-698cf6dc32ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.011139 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher0468-account-delete-z5rw6"] Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.022926 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher0468-account-delete-z5rw6"] Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.031307 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-0468-account-create-update-42rph"] Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.038846 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-0468-account-create-update-42rph"] Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.290829 4704 generic.go:334] "Generic (PLEG): container finished" podID="7c432492-59d5-4a17-b5ee-698cf6dc32ac" 
containerID="2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" exitCode=0 Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.290917 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7c432492-59d5-4a17-b5ee-698cf6dc32ac","Type":"ContainerDied","Data":"2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17"} Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.290944 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7c432492-59d5-4a17-b5ee-698cf6dc32ac","Type":"ContainerDied","Data":"5dadf05d35f281200cd7db00317184c446c923eb1e8faa63fa2a0f2c07111d72"} Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.290962 4704 scope.go:117] "RemoveContainer" containerID="2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.291919 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.297849 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerStarted","Data":"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5"} Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.297997 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-central-agent" containerID="cri-o://b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" gracePeriod=30 Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.298070 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="sg-core" containerID="cri-o://67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" gracePeriod=30 Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.298076 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="proxy-httpd" containerID="cri-o://648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" gracePeriod=30 Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.298068 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.298105 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-notification-agent" containerID="cri-o://19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" 
gracePeriod=30 Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.335024 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.845298377 podStartE2EDuration="5.335008237s" podCreationTimestamp="2026-01-22 16:52:16 +0000 UTC" firstStartedPulling="2026-01-22 16:52:17.189279299 +0000 UTC m=+1429.833825999" lastFinishedPulling="2026-01-22 16:52:20.678989159 +0000 UTC m=+1433.323535859" observedRunningTime="2026-01-22 16:52:21.325925679 +0000 UTC m=+1433.970472379" watchObservedRunningTime="2026-01-22 16:52:21.335008237 +0000 UTC m=+1433.979554937" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.336463 4704 scope.go:117] "RemoveContainer" containerID="2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" Jan 22 16:52:21 crc kubenswrapper[4704]: E0122 16:52:21.339954 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17\": container with ID starting with 2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17 not found: ID does not exist" containerID="2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.340006 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17"} err="failed to get container status \"2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17\": rpc error: code = NotFound desc = could not find container \"2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17\": container with ID starting with 2f5920cd80cee50acb1637e3a855261b2f0a4b06f51acbb88f4a77ad6e040b17 not found: ID does not exist" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.346434 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.351659 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.648970 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06378b9b-335c-44d7-9cb1-a53392766bf5" path="/var/lib/kubelet/pods/06378b9b-335c-44d7-9cb1-a53392766bf5/volumes" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.649730 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" path="/var/lib/kubelet/pods/07f21df6-be2e-4b70-99f7-0dca3af15451/volumes" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.651003 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50fc2fb9-9bc4-4f20-8258-2b471827216a" path="/var/lib/kubelet/pods/50fc2fb9-9bc4-4f20-8258-2b471827216a/volumes" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.653059 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c432492-59d5-4a17-b5ee-698cf6dc32ac" path="/var/lib/kubelet/pods/7c432492-59d5-4a17-b5ee-698cf6dc32ac/volumes" Jan 22 16:52:21 crc kubenswrapper[4704]: I0122 16:52:21.654098 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc8f4c0-e69d-48db-baf3-7d2ba6682898" path="/var/lib/kubelet/pods/7fc8f4c0-e69d-48db-baf3-7d2ba6682898/volumes" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.053399 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058495 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw"] Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058818 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-api" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058833 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-api" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058843 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-notification-agent" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058849 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-notification-agent" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058860 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a1a943-bfec-424c-b3c5-424afee63f63" containerName="watcher-decision-engine" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058866 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a1a943-bfec-424c-b3c5-424afee63f63" containerName="watcher-decision-engine" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058874 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06378b9b-335c-44d7-9cb1-a53392766bf5" containerName="mariadb-account-delete" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058879 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="06378b9b-335c-44d7-9cb1-a53392766bf5" containerName="mariadb-account-delete" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058894 4704 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="proxy-httpd" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058899 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="proxy-httpd" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058908 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-kuttl-api-log" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058914 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-kuttl-api-log" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058924 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="sg-core" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058931 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="sg-core" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058945 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-central-agent" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058950 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-central-agent" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.058961 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c432492-59d5-4a17-b5ee-698cf6dc32ac" containerName="watcher-applier" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.058967 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c432492-59d5-4a17-b5ee-698cf6dc32ac" containerName="watcher-applier" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059113 4704 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="7c432492-59d5-4a17-b5ee-698cf6dc32ac" containerName="watcher-applier" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059123 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="06378b9b-335c-44d7-9cb1-a53392766bf5" containerName="mariadb-account-delete" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059130 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a1a943-bfec-424c-b3c5-424afee63f63" containerName="watcher-decision-engine" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059138 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-central-agent" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059153 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="ceilometer-notification-agent" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059161 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="sg-core" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059170 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-api" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059181 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f21df6-be2e-4b70-99f7-0dca3af15451" containerName="watcher-kuttl-api-log" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059191 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerName="proxy-httpd" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.059683 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.061596 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.095037 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-qxd4w"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.096140 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.110745 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-qxd4w"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.116858 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.136912 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-sg-core-conf-yaml\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.136963 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-log-httpd\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137098 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-run-httpd\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: 
\"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137116 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-config-data\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137148 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5vr8\" (UniqueName: \"kubernetes.io/projected/e9722f27-89ae-485b-ace7-3af2257bd5c5-kube-api-access-m5vr8\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137173 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-combined-ca-bundle\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137193 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-scripts\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137237 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-ceilometer-tls-certs\") pod \"e9722f27-89ae-485b-ace7-3af2257bd5c5\" (UID: \"e9722f27-89ae-485b-ace7-3af2257bd5c5\") " Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137615 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5bxpk\" (UniqueName: \"kubernetes.io/projected/f6b2cb7a-380b-4064-b7fe-100955d2132e-kube-api-access-5bxpk\") pod \"watcher-cf37-account-create-update-4xlqw\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137667 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b065d253-835b-4186-b4f0-7b4cca0c0858-operator-scripts\") pod \"watcher-db-create-qxd4w\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137734 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9pb\" (UniqueName: \"kubernetes.io/projected/b065d253-835b-4186-b4f0-7b4cca0c0858-kube-api-access-qp9pb\") pod \"watcher-db-create-qxd4w\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.137755 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b2cb7a-380b-4064-b7fe-100955d2132e-operator-scripts\") pod \"watcher-cf37-account-create-update-4xlqw\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.142219 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.142531 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.176735 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9722f27-89ae-485b-ace7-3af2257bd5c5-kube-api-access-m5vr8" (OuterVolumeSpecName: "kube-api-access-m5vr8") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "kube-api-access-m5vr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.199203 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-scripts" (OuterVolumeSpecName: "scripts") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.243664 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b065d253-835b-4186-b4f0-7b4cca0c0858-operator-scripts\") pod \"watcher-db-create-qxd4w\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.243777 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp9pb\" (UniqueName: \"kubernetes.io/projected/b065d253-835b-4186-b4f0-7b4cca0c0858-kube-api-access-qp9pb\") pod \"watcher-db-create-qxd4w\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.243820 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b2cb7a-380b-4064-b7fe-100955d2132e-operator-scripts\") pod \"watcher-cf37-account-create-update-4xlqw\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.243880 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bxpk\" (UniqueName: \"kubernetes.io/projected/f6b2cb7a-380b-4064-b7fe-100955d2132e-kube-api-access-5bxpk\") pod \"watcher-cf37-account-create-update-4xlqw\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.243955 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc 
kubenswrapper[4704]: I0122 16:52:22.243969 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5vr8\" (UniqueName: \"kubernetes.io/projected/e9722f27-89ae-485b-ace7-3af2257bd5c5-kube-api-access-m5vr8\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.243981 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.243991 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9722f27-89ae-485b-ace7-3af2257bd5c5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.244758 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b2cb7a-380b-4064-b7fe-100955d2132e-operator-scripts\") pod \"watcher-cf37-account-create-update-4xlqw\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.244774 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b065d253-835b-4186-b4f0-7b4cca0c0858-operator-scripts\") pod \"watcher-db-create-qxd4w\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.246038 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.271566 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp9pb\" (UniqueName: \"kubernetes.io/projected/b065d253-835b-4186-b4f0-7b4cca0c0858-kube-api-access-qp9pb\") pod \"watcher-db-create-qxd4w\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.271876 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bxpk\" (UniqueName: \"kubernetes.io/projected/f6b2cb7a-380b-4064-b7fe-100955d2132e-kube-api-access-5bxpk\") pod \"watcher-cf37-account-create-update-4xlqw\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.303319 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-config-data" (OuterVolumeSpecName: "config-data") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.304918 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308752 4704 generic.go:334] "Generic (PLEG): container finished" podID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerID="648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" exitCode=0 Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308786 4704 generic.go:334] "Generic (PLEG): container finished" podID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerID="67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" exitCode=2 Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308807 4704 generic.go:334] "Generic (PLEG): container finished" podID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerID="19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" exitCode=0 Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308814 4704 generic.go:334] "Generic (PLEG): container finished" podID="e9722f27-89ae-485b-ace7-3af2257bd5c5" containerID="b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" exitCode=0 Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308857 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerDied","Data":"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5"} Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308885 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerDied","Data":"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d"} Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308894 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerDied","Data":"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4"} Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308902 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerDied","Data":"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b"} Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308911 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e9722f27-89ae-485b-ace7-3af2257bd5c5","Type":"ContainerDied","Data":"747a49c6e524cb1676082281bad749bc07faedde37928ee3db528e37e50b52a8"} Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.308925 4704 scope.go:117] "RemoveContainer" containerID="648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.309050 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.331036 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e9722f27-89ae-485b-ace7-3af2257bd5c5" (UID: "e9722f27-89ae-485b-ace7-3af2257bd5c5"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.339825 4704 scope.go:117] "RemoveContainer" containerID="67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.345619 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.345653 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.345665 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.345675 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9722f27-89ae-485b-ace7-3af2257bd5c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.372648 4704 scope.go:117] "RemoveContainer" containerID="19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.392158 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.412600 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.438023 4704 scope.go:117] "RemoveContainer" containerID="b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.480469 4704 scope.go:117] "RemoveContainer" containerID="648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.480884 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": container with ID starting with 648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5 not found: ID does not exist" containerID="648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.480998 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5"} err="failed to get container status \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": rpc error: code = NotFound desc = could not find container \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": container with ID starting with 648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5 not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.481075 4704 scope.go:117] "RemoveContainer" containerID="67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.481359 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": container with ID starting with 
67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d not found: ID does not exist" containerID="67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.481380 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d"} err="failed to get container status \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": rpc error: code = NotFound desc = could not find container \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": container with ID starting with 67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.481393 4704 scope.go:117] "RemoveContainer" containerID="19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.481585 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": container with ID starting with 19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4 not found: ID does not exist" containerID="19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.481605 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4"} err="failed to get container status \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": rpc error: code = NotFound desc = could not find container \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": container with ID starting with 19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4 not found: ID does not 
exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.481617 4704 scope.go:117] "RemoveContainer" containerID="b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" Jan 22 16:52:22 crc kubenswrapper[4704]: E0122 16:52:22.481776 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": container with ID starting with b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b not found: ID does not exist" containerID="b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.481875 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b"} err="failed to get container status \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": rpc error: code = NotFound desc = could not find container \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": container with ID starting with b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.481891 4704 scope.go:117] "RemoveContainer" containerID="648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482062 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5"} err="failed to get container status \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": rpc error: code = NotFound desc = could not find container \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": container with ID starting with 648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5 not found: ID 
does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482081 4704 scope.go:117] "RemoveContainer" containerID="67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482315 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d"} err="failed to get container status \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": rpc error: code = NotFound desc = could not find container \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": container with ID starting with 67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482361 4704 scope.go:117] "RemoveContainer" containerID="19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482576 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4"} err="failed to get container status \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": rpc error: code = NotFound desc = could not find container \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": container with ID starting with 19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4 not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482601 4704 scope.go:117] "RemoveContainer" containerID="b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482763 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b"} err="failed to get container 
status \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": rpc error: code = NotFound desc = could not find container \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": container with ID starting with b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.482780 4704 scope.go:117] "RemoveContainer" containerID="648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.483016 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5"} err="failed to get container status \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": rpc error: code = NotFound desc = could not find container \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": container with ID starting with 648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5 not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.483040 4704 scope.go:117] "RemoveContainer" containerID="67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.483242 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d"} err="failed to get container status \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": rpc error: code = NotFound desc = could not find container \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": container with ID starting with 67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.483258 4704 scope.go:117] "RemoveContainer" 
containerID="19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.483970 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4"} err="failed to get container status \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": rpc error: code = NotFound desc = could not find container \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": container with ID starting with 19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4 not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.483992 4704 scope.go:117] "RemoveContainer" containerID="b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484178 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b"} err="failed to get container status \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": rpc error: code = NotFound desc = could not find container \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": container with ID starting with b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484196 4704 scope.go:117] "RemoveContainer" containerID="648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484393 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5"} err="failed to get container status \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": rpc error: code = NotFound desc = could 
not find container \"648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5\": container with ID starting with 648ee921c4db65551bce6735b1aca5185191b9d71baf1d41a10eacc82c9bc8f5 not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484464 4704 scope.go:117] "RemoveContainer" containerID="67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484692 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d"} err="failed to get container status \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": rpc error: code = NotFound desc = could not find container \"67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d\": container with ID starting with 67ce94e69f6b801a7a1b1efb94e0dedb1c5af1cb387e589bf116d7feeb96795d not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484712 4704 scope.go:117] "RemoveContainer" containerID="19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484883 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4"} err="failed to get container status \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": rpc error: code = NotFound desc = could not find container \"19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4\": container with ID starting with 19767a48b276b877ff7ce38103bc48459468534e381018638411b555ada830e4 not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.484959 4704 scope.go:117] "RemoveContainer" containerID="b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 
16:52:22.485166 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b"} err="failed to get container status \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": rpc error: code = NotFound desc = could not find container \"b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b\": container with ID starting with b83dbfc064348832c8df821b56ace1680bc444a212c855eb64bc8ebd57537b7b not found: ID does not exist" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.639814 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.654845 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.684300 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.686785 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.689852 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.690065 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.690892 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.691655 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754307 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754422 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnqsj\" (UniqueName: \"kubernetes.io/projected/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-kube-api-access-qnqsj\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754521 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-scripts\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754577 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-config-data\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754597 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754648 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-log-httpd\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754672 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-run-httpd\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.754732 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.851129 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw"] Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856603 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856694 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856726 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnqsj\" (UniqueName: \"kubernetes.io/projected/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-kube-api-access-qnqsj\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856774 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-scripts\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856807 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-config-data\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856824 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856841 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-log-httpd\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.856856 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-run-httpd\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.857216 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-run-httpd\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: W0122 16:52:22.857432 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6b2cb7a_380b_4064_b7fe_100955d2132e.slice/crio-34114a3a8e755764c2e289ad14d004eacc2185c6f02af27e15bbc57466ee840a WatchSource:0}: Error finding container 34114a3a8e755764c2e289ad14d004eacc2185c6f02af27e15bbc57466ee840a: Status 404 returned error can't find the container with id 34114a3a8e755764c2e289ad14d004eacc2185c6f02af27e15bbc57466ee840a Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.857917 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-log-httpd\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.862886 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.868744 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-scripts\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.868999 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.869193 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-config-data\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.872839 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.877567 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnqsj\" (UniqueName: \"kubernetes.io/projected/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-kube-api-access-qnqsj\") pod \"ceilometer-0\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:22 crc kubenswrapper[4704]: I0122 16:52:22.971534 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-qxd4w"] Jan 22 16:52:22 crc kubenswrapper[4704]: W0122 16:52:22.980560 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb065d253_835b_4186_b4f0_7b4cca0c0858.slice/crio-57d5165c11bf83a6c5760615598024c42e4d4b7d80f6e456720f5a7f6afad70d WatchSource:0}: Error finding container 57d5165c11bf83a6c5760615598024c42e4d4b7d80f6e456720f5a7f6afad70d: Status 404 returned error can't find the container with id 57d5165c11bf83a6c5760615598024c42e4d4b7d80f6e456720f5a7f6afad70d Jan 22 16:52:23 crc kubenswrapper[4704]: I0122 16:52:23.008638 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:23 crc kubenswrapper[4704]: I0122 16:52:23.332414 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-qxd4w" event={"ID":"b065d253-835b-4186-b4f0-7b4cca0c0858","Type":"ContainerStarted","Data":"be4ac6d6e4ed96f67d644cde4a3b30ff6254ff429a947e9db3260e2ec1c9415c"} Jan 22 16:52:23 crc kubenswrapper[4704]: I0122 16:52:23.332806 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-qxd4w" event={"ID":"b065d253-835b-4186-b4f0-7b4cca0c0858","Type":"ContainerStarted","Data":"57d5165c11bf83a6c5760615598024c42e4d4b7d80f6e456720f5a7f6afad70d"} Jan 22 16:52:23 crc kubenswrapper[4704]: I0122 16:52:23.333655 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" event={"ID":"f6b2cb7a-380b-4064-b7fe-100955d2132e","Type":"ContainerStarted","Data":"e6b4beb1185b52c1b1447eb468bbf9a959ad5c8c15c89042fdabe3e6bd203014"} Jan 22 16:52:23 crc kubenswrapper[4704]: I0122 16:52:23.333676 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" event={"ID":"f6b2cb7a-380b-4064-b7fe-100955d2132e","Type":"ContainerStarted","Data":"34114a3a8e755764c2e289ad14d004eacc2185c6f02af27e15bbc57466ee840a"} Jan 22 16:52:23 crc kubenswrapper[4704]: I0122 16:52:23.347649 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-qxd4w" podStartSLOduration=1.347631232 podStartE2EDuration="1.347631232s" podCreationTimestamp="2026-01-22 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:23.346053558 +0000 UTC m=+1435.990600258" watchObservedRunningTime="2026-01-22 16:52:23.347631232 +0000 UTC m=+1435.992177932" Jan 22 16:52:23 crc 
kubenswrapper[4704]: I0122 16:52:23.481589 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:23 crc kubenswrapper[4704]: W0122 16:52:23.487475 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98f0b80b_9e99_4f50_8d29_6d42391a4d0d.slice/crio-08aac390cbf56a2fd09c8ea2d168f7c51597154ce8e0d59cd30ec35b32140da4 WatchSource:0}: Error finding container 08aac390cbf56a2fd09c8ea2d168f7c51597154ce8e0d59cd30ec35b32140da4: Status 404 returned error can't find the container with id 08aac390cbf56a2fd09c8ea2d168f7c51597154ce8e0d59cd30ec35b32140da4 Jan 22 16:52:23 crc kubenswrapper[4704]: I0122 16:52:23.647210 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9722f27-89ae-485b-ace7-3af2257bd5c5" path="/var/lib/kubelet/pods/e9722f27-89ae-485b-ace7-3af2257bd5c5/volumes" Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.344066 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerStarted","Data":"a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4"} Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.344297 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerStarted","Data":"08aac390cbf56a2fd09c8ea2d168f7c51597154ce8e0d59cd30ec35b32140da4"} Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.345715 4704 generic.go:334] "Generic (PLEG): container finished" podID="b065d253-835b-4186-b4f0-7b4cca0c0858" containerID="be4ac6d6e4ed96f67d644cde4a3b30ff6254ff429a947e9db3260e2ec1c9415c" exitCode=0 Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.345759 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-qxd4w" 
event={"ID":"b065d253-835b-4186-b4f0-7b4cca0c0858","Type":"ContainerDied","Data":"be4ac6d6e4ed96f67d644cde4a3b30ff6254ff429a947e9db3260e2ec1c9415c"} Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.347688 4704 generic.go:334] "Generic (PLEG): container finished" podID="f6b2cb7a-380b-4064-b7fe-100955d2132e" containerID="e6b4beb1185b52c1b1447eb468bbf9a959ad5c8c15c89042fdabe3e6bd203014" exitCode=0 Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.347716 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" event={"ID":"f6b2cb7a-380b-4064-b7fe-100955d2132e","Type":"ContainerDied","Data":"e6b4beb1185b52c1b1447eb468bbf9a959ad5c8c15c89042fdabe3e6bd203014"} Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.678613 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.786332 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b2cb7a-380b-4064-b7fe-100955d2132e-operator-scripts\") pod \"f6b2cb7a-380b-4064-b7fe-100955d2132e\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.786538 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bxpk\" (UniqueName: \"kubernetes.io/projected/f6b2cb7a-380b-4064-b7fe-100955d2132e-kube-api-access-5bxpk\") pod \"f6b2cb7a-380b-4064-b7fe-100955d2132e\" (UID: \"f6b2cb7a-380b-4064-b7fe-100955d2132e\") " Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.787314 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b2cb7a-380b-4064-b7fe-100955d2132e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6b2cb7a-380b-4064-b7fe-100955d2132e" (UID: 
"f6b2cb7a-380b-4064-b7fe-100955d2132e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.802667 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b2cb7a-380b-4064-b7fe-100955d2132e-kube-api-access-5bxpk" (OuterVolumeSpecName: "kube-api-access-5bxpk") pod "f6b2cb7a-380b-4064-b7fe-100955d2132e" (UID: "f6b2cb7a-380b-4064-b7fe-100955d2132e"). InnerVolumeSpecName "kube-api-access-5bxpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.887979 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bxpk\" (UniqueName: \"kubernetes.io/projected/f6b2cb7a-380b-4064-b7fe-100955d2132e-kube-api-access-5bxpk\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:24 crc kubenswrapper[4704]: I0122 16:52:24.888012 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b2cb7a-380b-4064-b7fe-100955d2132e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.358010 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" event={"ID":"f6b2cb7a-380b-4064-b7fe-100955d2132e","Type":"ContainerDied","Data":"34114a3a8e755764c2e289ad14d004eacc2185c6f02af27e15bbc57466ee840a"} Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.358265 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34114a3a8e755764c2e289ad14d004eacc2185c6f02af27e15bbc57466ee840a" Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.358310 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw" Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.368841 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerStarted","Data":"56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320"} Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.709024 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.842961 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b065d253-835b-4186-b4f0-7b4cca0c0858-operator-scripts\") pod \"b065d253-835b-4186-b4f0-7b4cca0c0858\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.843198 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp9pb\" (UniqueName: \"kubernetes.io/projected/b065d253-835b-4186-b4f0-7b4cca0c0858-kube-api-access-qp9pb\") pod \"b065d253-835b-4186-b4f0-7b4cca0c0858\" (UID: \"b065d253-835b-4186-b4f0-7b4cca0c0858\") " Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.843549 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b065d253-835b-4186-b4f0-7b4cca0c0858-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b065d253-835b-4186-b4f0-7b4cca0c0858" (UID: "b065d253-835b-4186-b4f0-7b4cca0c0858"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.843980 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b065d253-835b-4186-b4f0-7b4cca0c0858-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.849950 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b065d253-835b-4186-b4f0-7b4cca0c0858-kube-api-access-qp9pb" (OuterVolumeSpecName: "kube-api-access-qp9pb") pod "b065d253-835b-4186-b4f0-7b4cca0c0858" (UID: "b065d253-835b-4186-b4f0-7b4cca0c0858"). InnerVolumeSpecName "kube-api-access-qp9pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:25 crc kubenswrapper[4704]: I0122 16:52:25.945691 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp9pb\" (UniqueName: \"kubernetes.io/projected/b065d253-835b-4186-b4f0-7b4cca0c0858-kube-api-access-qp9pb\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:26 crc kubenswrapper[4704]: I0122 16:52:26.397723 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-qxd4w" Jan 22 16:52:26 crc kubenswrapper[4704]: I0122 16:52:26.397724 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-qxd4w" event={"ID":"b065d253-835b-4186-b4f0-7b4cca0c0858","Type":"ContainerDied","Data":"57d5165c11bf83a6c5760615598024c42e4d4b7d80f6e456720f5a7f6afad70d"} Jan 22 16:52:26 crc kubenswrapper[4704]: I0122 16:52:26.399928 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57d5165c11bf83a6c5760615598024c42e4d4b7d80f6e456720f5a7f6afad70d" Jan 22 16:52:26 crc kubenswrapper[4704]: I0122 16:52:26.405391 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerStarted","Data":"fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329"} Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.341975 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w"] Jan 22 16:52:27 crc kubenswrapper[4704]: E0122 16:52:27.342566 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b065d253-835b-4186-b4f0-7b4cca0c0858" containerName="mariadb-database-create" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.342584 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b065d253-835b-4186-b4f0-7b4cca0c0858" containerName="mariadb-database-create" Jan 22 16:52:27 crc kubenswrapper[4704]: E0122 16:52:27.342604 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b2cb7a-380b-4064-b7fe-100955d2132e" containerName="mariadb-account-create-update" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.342611 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b2cb7a-380b-4064-b7fe-100955d2132e" containerName="mariadb-account-create-update" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.342751 
4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="b065d253-835b-4186-b4f0-7b4cca0c0858" containerName="mariadb-database-create" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.342772 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b2cb7a-380b-4064-b7fe-100955d2132e" containerName="mariadb-account-create-update" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.343267 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.349824 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.350271 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6shn7" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.362283 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w"] Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.413480 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerStarted","Data":"2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5"} Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.414452 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.440340 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.020856491 podStartE2EDuration="5.440321442s" podCreationTimestamp="2026-01-22 16:52:22 +0000 UTC" firstStartedPulling="2026-01-22 16:52:23.489623503 +0000 UTC m=+1436.134170203" 
lastFinishedPulling="2026-01-22 16:52:26.909088454 +0000 UTC m=+1439.553635154" observedRunningTime="2026-01-22 16:52:27.433677413 +0000 UTC m=+1440.078224113" watchObservedRunningTime="2026-01-22 16:52:27.440321442 +0000 UTC m=+1440.084868142" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.470031 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.470097 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.470188 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5shk\" (UniqueName: \"kubernetes.io/projected/5b0b1107-1d1c-4907-b3ab-e4121d83335f-kube-api-access-j5shk\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.470205 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-config-data\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.571356 
4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5shk\" (UniqueName: \"kubernetes.io/projected/5b0b1107-1d1c-4907-b3ab-e4121d83335f-kube-api-access-j5shk\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.571396 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-config-data\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.571474 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.571495 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.576448 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-config-data\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.576821 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.577086 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.590080 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5shk\" (UniqueName: \"kubernetes.io/projected/5b0b1107-1d1c-4907-b3ab-e4121d83335f-kube-api-access-j5shk\") pod \"watcher-kuttl-db-sync-f2t5w\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.659762 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6shn7" Jan 22 16:52:27 crc kubenswrapper[4704]: I0122 16:52:27.675110 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:28 crc kubenswrapper[4704]: I0122 16:52:28.137512 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w"] Jan 22 16:52:28 crc kubenswrapper[4704]: I0122 16:52:28.424909 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" event={"ID":"5b0b1107-1d1c-4907-b3ab-e4121d83335f","Type":"ContainerStarted","Data":"c0d03b0a34ae3554163e9f1ad62099484875c3c3e58f94ebd7258641a4d1aa19"} Jan 22 16:52:28 crc kubenswrapper[4704]: I0122 16:52:28.424972 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" event={"ID":"5b0b1107-1d1c-4907-b3ab-e4121d83335f","Type":"ContainerStarted","Data":"bd0a5dac84cfcbf179526e70d295543739500fbd6f94e0c20d092a0606e8077f"} Jan 22 16:52:28 crc kubenswrapper[4704]: I0122 16:52:28.454637 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" podStartSLOduration=1.454613505 podStartE2EDuration="1.454613505s" podCreationTimestamp="2026-01-22 16:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:28.45335108 +0000 UTC m=+1441.097897790" watchObservedRunningTime="2026-01-22 16:52:28.454613505 +0000 UTC m=+1441.099160205" Jan 22 16:52:31 crc kubenswrapper[4704]: I0122 16:52:31.451060 4704 generic.go:334] "Generic (PLEG): container finished" podID="5b0b1107-1d1c-4907-b3ab-e4121d83335f" containerID="c0d03b0a34ae3554163e9f1ad62099484875c3c3e58f94ebd7258641a4d1aa19" exitCode=0 Jan 22 16:52:31 crc kubenswrapper[4704]: I0122 16:52:31.451160 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" 
event={"ID":"5b0b1107-1d1c-4907-b3ab-e4121d83335f","Type":"ContainerDied","Data":"c0d03b0a34ae3554163e9f1ad62099484875c3c3e58f94ebd7258641a4d1aa19"} Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.863406 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.954132 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5shk\" (UniqueName: \"kubernetes.io/projected/5b0b1107-1d1c-4907-b3ab-e4121d83335f-kube-api-access-j5shk\") pod \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.954244 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-combined-ca-bundle\") pod \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.954319 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-config-data\") pod \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.954430 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-db-sync-config-data\") pod \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\" (UID: \"5b0b1107-1d1c-4907-b3ab-e4121d83335f\") " Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.967122 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5b0b1107-1d1c-4907-b3ab-e4121d83335f-kube-api-access-j5shk" (OuterVolumeSpecName: "kube-api-access-j5shk") pod "5b0b1107-1d1c-4907-b3ab-e4121d83335f" (UID: "5b0b1107-1d1c-4907-b3ab-e4121d83335f"). InnerVolumeSpecName "kube-api-access-j5shk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.974673 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5b0b1107-1d1c-4907-b3ab-e4121d83335f" (UID: "5b0b1107-1d1c-4907-b3ab-e4121d83335f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.979935 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b0b1107-1d1c-4907-b3ab-e4121d83335f" (UID: "5b0b1107-1d1c-4907-b3ab-e4121d83335f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:32 crc kubenswrapper[4704]: I0122 16:52:32.997182 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-config-data" (OuterVolumeSpecName: "config-data") pod "5b0b1107-1d1c-4907-b3ab-e4121d83335f" (UID: "5b0b1107-1d1c-4907-b3ab-e4121d83335f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.056683 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.056737 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5shk\" (UniqueName: \"kubernetes.io/projected/5b0b1107-1d1c-4907-b3ab-e4121d83335f-kube-api-access-j5shk\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.056759 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.056776 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0b1107-1d1c-4907-b3ab-e4121d83335f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.470322 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" event={"ID":"5b0b1107-1d1c-4907-b3ab-e4121d83335f","Type":"ContainerDied","Data":"bd0a5dac84cfcbf179526e70d295543739500fbd6f94e0c20d092a0606e8077f"} Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.470733 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd0a5dac84cfcbf179526e70d295543739500fbd6f94e0c20d092a0606e8077f" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.470417 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.742848 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:33 crc kubenswrapper[4704]: E0122 16:52:33.743304 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0b1107-1d1c-4907-b3ab-e4121d83335f" containerName="watcher-kuttl-db-sync" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.743335 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0b1107-1d1c-4907-b3ab-e4121d83335f" containerName="watcher-kuttl-db-sync" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.743567 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0b1107-1d1c-4907-b3ab-e4121d83335f" containerName="watcher-kuttl-db-sync" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.744552 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.747810 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.748183 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6shn7" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.748244 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.748249 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.759297 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:33 crc kubenswrapper[4704]: 
I0122 16:52:33.840187 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.841733 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.843406 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.846928 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.872222 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9cd\" (UniqueName: \"kubernetes.io/projected/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-kube-api-access-np9cd\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.872263 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.872299 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.872572 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.872651 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.872709 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.872877 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.888616 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.890003 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.892511 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.900261 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974253 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t55zk\" (UniqueName: \"kubernetes.io/projected/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-kube-api-access-t55zk\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974304 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974333 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974359 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974383 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974405 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsk8n\" (UniqueName: \"kubernetes.io/projected/f56533f4-37eb-4950-bc3c-71a536f51479-kube-api-access-xsk8n\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974433 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974512 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974568 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974642 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f56533f4-37eb-4950-bc3c-71a536f51479-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974685 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974739 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np9cd\" (UniqueName: \"kubernetes.io/projected/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-kube-api-access-np9cd\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974772 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974831 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974842 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974893 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.974912 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.979471 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.980015 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-public-tls-certs\") pod 
\"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.981377 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.982382 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.989940 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:33 crc kubenswrapper[4704]: I0122 16:52:33.994463 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np9cd\" (UniqueName: \"kubernetes.io/projected/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-kube-api-access-np9cd\") pod \"watcher-kuttl-api-0\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.067771 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.076763 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f56533f4-37eb-4950-bc3c-71a536f51479-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.076858 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.076897 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.076943 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t55zk\" (UniqueName: \"kubernetes.io/projected/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-kube-api-access-t55zk\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.076961 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.076976 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.076996 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.077022 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.077041 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsk8n\" (UniqueName: \"kubernetes.io/projected/f56533f4-37eb-4950-bc3c-71a536f51479-kube-api-access-xsk8n\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.077283 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f56533f4-37eb-4950-bc3c-71a536f51479-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.081521 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.081692 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.083604 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.084245 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.086219 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 
16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.086607 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.097413 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsk8n\" (UniqueName: \"kubernetes.io/projected/f56533f4-37eb-4950-bc3c-71a536f51479-kube-api-access-xsk8n\") pod \"watcher-kuttl-applier-0\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.103359 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t55zk\" (UniqueName: \"kubernetes.io/projected/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-kube-api-access-t55zk\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.159132 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.211381 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.524391 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.659444 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:52:34 crc kubenswrapper[4704]: I0122 16:52:34.826504 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:52:34 crc kubenswrapper[4704]: W0122 16:52:34.827183 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf56533f4_37eb_4950_bc3c_71a536f51479.slice/crio-535aac1115f2d93b772112071e397cae842a6e5068d81b130401c9eaad3ebee7 WatchSource:0}: Error finding container 535aac1115f2d93b772112071e397cae842a6e5068d81b130401c9eaad3ebee7: Status 404 returned error can't find the container with id 535aac1115f2d93b772112071e397cae842a6e5068d81b130401c9eaad3ebee7 Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.489497 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f56533f4-37eb-4950-bc3c-71a536f51479","Type":"ContainerStarted","Data":"88a2bdc3ed9bbcf9acdfee37b2939486a9244275d9acd2973323817bdf59ced2"} Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.491627 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f56533f4-37eb-4950-bc3c-71a536f51479","Type":"ContainerStarted","Data":"535aac1115f2d93b772112071e397cae842a6e5068d81b130401c9eaad3ebee7"} Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.494627 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"5a7da9f5-25ae-4b0e-8ece-f532ed21281d","Type":"ContainerStarted","Data":"56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24"} Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.494690 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"5a7da9f5-25ae-4b0e-8ece-f532ed21281d","Type":"ContainerStarted","Data":"ffb79b2e2bf55b37212295b559e113e3af02e012d9b7281af7158c9ee07f96c8"} Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.496741 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"aecead70-3acf-4c4c-99c0-e9ce3c8e867b","Type":"ContainerStarted","Data":"148395564a14f81268688b367327d17a84a49e4293a9808b49e17360dc713418"} Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.496785 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"aecead70-3acf-4c4c-99c0-e9ce3c8e867b","Type":"ContainerStarted","Data":"b81f06af084332242a4d7f33cee1ba0df46e60ddbeacc293c0407f06f08f214a"} Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.496811 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"aecead70-3acf-4c4c-99c0-e9ce3c8e867b","Type":"ContainerStarted","Data":"e5e048b5aeaa8d66d4c6ea786d835462bbf6c542b6c878da7be394a5c90cec7a"} Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.497330 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.511260 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.511242533 podStartE2EDuration="2.511242533s" podCreationTimestamp="2026-01-22 16:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:35.504881482 +0000 UTC m=+1448.149428182" watchObservedRunningTime="2026-01-22 16:52:35.511242533 +0000 UTC m=+1448.155789233" Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.531082 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.531066347 podStartE2EDuration="2.531066347s" podCreationTimestamp="2026-01-22 16:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:35.525018585 +0000 UTC m=+1448.169565275" watchObservedRunningTime="2026-01-22 16:52:35.531066347 +0000 UTC m=+1448.175613047" Jan 22 16:52:35 crc kubenswrapper[4704]: I0122 16:52:35.544756 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.544736746 podStartE2EDuration="2.544736746s" podCreationTimestamp="2026-01-22 16:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:52:35.540146925 +0000 UTC m=+1448.184693625" watchObservedRunningTime="2026-01-22 16:52:35.544736746 +0000 UTC m=+1448.189283446" Jan 22 16:52:37 crc kubenswrapper[4704]: I0122 16:52:37.513952 4704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:52:37 crc kubenswrapper[4704]: I0122 16:52:37.912070 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:39 crc kubenswrapper[4704]: I0122 16:52:39.068249 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:39 crc kubenswrapper[4704]: I0122 16:52:39.212427 4704 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.068920 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.090749 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.159862 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.188143 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.213272 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.256488 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.568256 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.597057 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.597135 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:52:44 crc kubenswrapper[4704]: I0122 16:52:44.614034 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 
16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.155048 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.155663 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-central-agent" containerID="cri-o://a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4" gracePeriod=30 Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.155775 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="proxy-httpd" containerID="cri-o://2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5" gracePeriod=30 Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.155822 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-notification-agent" containerID="cri-o://56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320" gracePeriod=30 Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.155776 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="sg-core" containerID="cri-o://fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329" gracePeriod=30 Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.264917 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.149:3000/\": read tcp 10.217.0.2:49160->10.217.0.149:3000: read: connection reset by peer" Jan 22 16:52:46 crc 
kubenswrapper[4704]: I0122 16:52:46.606675 4704 generic.go:334] "Generic (PLEG): container finished" podID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerID="2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5" exitCode=0 Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.606709 4704 generic.go:334] "Generic (PLEG): container finished" podID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerID="fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329" exitCode=2 Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.606719 4704 generic.go:334] "Generic (PLEG): container finished" podID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerID="a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4" exitCode=0 Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.606742 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerDied","Data":"2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5"} Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.606818 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerDied","Data":"fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329"} Jan 22 16:52:46 crc kubenswrapper[4704]: I0122 16:52:46.606833 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerDied","Data":"a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4"} Jan 22 16:52:49 crc kubenswrapper[4704]: I0122 16:52:49.085887 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 22 16:52:49 crc kubenswrapper[4704]: I0122 16:52:49.086357 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.384560 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.483693 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-combined-ca-bundle\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.483749 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-run-httpd\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.483831 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-config-data\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.483856 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-ceilometer-tls-certs\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: 
\"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.483923 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-log-httpd\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.483970 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-sg-core-conf-yaml\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.483995 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnqsj\" (UniqueName: \"kubernetes.io/projected/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-kube-api-access-qnqsj\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.484023 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-scripts\") pod \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\" (UID: \"98f0b80b-9e99-4f50-8d29-6d42391a4d0d\") " Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.484338 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.484632 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.489182 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-kube-api-access-qnqsj" (OuterVolumeSpecName: "kube-api-access-qnqsj") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "kube-api-access-qnqsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.496636 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-scripts" (OuterVolumeSpecName: "scripts") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.510050 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.534313 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.549996 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.571107 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-config-data" (OuterVolumeSpecName: "config-data") pod "98f0b80b-9e99-4f50-8d29-6d42391a4d0d" (UID: "98f0b80b-9e99-4f50-8d29-6d42391a4d0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.585873 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnqsj\" (UniqueName: \"kubernetes.io/projected/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-kube-api-access-qnqsj\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.586095 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.586158 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.586213 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.586273 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.586332 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.586385 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.586437 4704 
reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98f0b80b-9e99-4f50-8d29-6d42391a4d0d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.657813 4704 generic.go:334] "Generic (PLEG): container finished" podID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerID="56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320" exitCode=0 Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.657860 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerDied","Data":"56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320"} Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.657920 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"98f0b80b-9e99-4f50-8d29-6d42391a4d0d","Type":"ContainerDied","Data":"08aac390cbf56a2fd09c8ea2d168f7c51597154ce8e0d59cd30ec35b32140da4"} Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.657941 4704 scope.go:117] "RemoveContainer" containerID="2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.657960 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.691055 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.699878 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.700881 4704 scope.go:117] "RemoveContainer" containerID="fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.712994 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.713399 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="proxy-httpd" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713416 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="proxy-httpd" Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.713445 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-central-agent" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713456 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-central-agent" Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.713474 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="sg-core" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713482 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="sg-core" Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.713498 4704 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-notification-agent" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713519 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-notification-agent" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713706 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-notification-agent" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713725 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="ceilometer-central-agent" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713743 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="proxy-httpd" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.713758 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" containerName="sg-core" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.719792 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.727010 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.727767 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.728066 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.728391 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.754210 4704 scope.go:117] "RemoveContainer" containerID="56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.776851 4704 scope.go:117] "RemoveContainer" containerID="a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.794720 4704 scope.go:117] "RemoveContainer" containerID="2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5" Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.795264 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5\": container with ID starting with 2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5 not found: ID does not exist" containerID="2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.795306 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5"} 
err="failed to get container status \"2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5\": rpc error: code = NotFound desc = could not find container \"2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5\": container with ID starting with 2747e533d5541c2e928835770efdda2d5729a4a3b621117ffcdad9552726cbf5 not found: ID does not exist" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.795332 4704 scope.go:117] "RemoveContainer" containerID="fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796199 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-run-httpd\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796245 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796267 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796285 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796299 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796322 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-config-data\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796400 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882hv\" (UniqueName: \"kubernetes.io/projected/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-kube-api-access-882hv\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796420 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-scripts\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.796722 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329\": container with ID starting with fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329 not found: ID 
does not exist" containerID="fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796745 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329"} err="failed to get container status \"fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329\": rpc error: code = NotFound desc = could not find container \"fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329\": container with ID starting with fe764e121689ca1f32a70a48833cb864669f828016fcc790470c4e01894f7329 not found: ID does not exist" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.796759 4704 scope.go:117] "RemoveContainer" containerID="56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320" Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.797181 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320\": container with ID starting with 56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320 not found: ID does not exist" containerID="56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.797205 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320"} err="failed to get container status \"56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320\": rpc error: code = NotFound desc = could not find container \"56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320\": container with ID starting with 56d21634e05739ea7ec16ce1090140e59f3179e604dcb7064d668110fafb6320 not found: ID does not exist" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.797218 4704 
scope.go:117] "RemoveContainer" containerID="a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4" Jan 22 16:52:51 crc kubenswrapper[4704]: E0122 16:52:51.798161 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4\": container with ID starting with a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4 not found: ID does not exist" containerID="a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.798180 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4"} err="failed to get container status \"a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4\": rpc error: code = NotFound desc = could not find container \"a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4\": container with ID starting with a10fed014810fa5cc20b5615ca79bc2d1f26faaebac1387842be31beae8005a4 not found: ID does not exist" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898416 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898484 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898513 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-log-httpd\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898537 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898572 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-config-data\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898680 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882hv\" (UniqueName: \"kubernetes.io/projected/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-kube-api-access-882hv\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898707 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-scripts\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.898739 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-run-httpd\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.899360 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-run-httpd\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.901472 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-log-httpd\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.903851 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-scripts\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.904123 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.904523 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc 
kubenswrapper[4704]: I0122 16:52:51.904747 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.905132 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-config-data\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:51 crc kubenswrapper[4704]: I0122 16:52:51.922221 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882hv\" (UniqueName: \"kubernetes.io/projected/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-kube-api-access-882hv\") pod \"ceilometer-0\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:52 crc kubenswrapper[4704]: I0122 16:52:52.056328 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:52 crc kubenswrapper[4704]: I0122 16:52:52.561853 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:52:52 crc kubenswrapper[4704]: W0122 16:52:52.568597 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc68aa7a9_44fe_4b4e_9d75_ed820d48f4c6.slice/crio-b401c0d0bad888225303013abf62447a1e6895c013c457ddc3c656dd7512266b WatchSource:0}: Error finding container b401c0d0bad888225303013abf62447a1e6895c013c457ddc3c656dd7512266b: Status 404 returned error can't find the container with id b401c0d0bad888225303013abf62447a1e6895c013c457ddc3c656dd7512266b Jan 22 16:52:52 crc kubenswrapper[4704]: I0122 16:52:52.665389 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerStarted","Data":"b401c0d0bad888225303013abf62447a1e6895c013c457ddc3c656dd7512266b"} Jan 22 16:52:53 crc kubenswrapper[4704]: I0122 16:52:53.653276 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f0b80b-9e99-4f50-8d29-6d42391a4d0d" path="/var/lib/kubelet/pods/98f0b80b-9e99-4f50-8d29-6d42391a4d0d/volumes" Jan 22 16:52:53 crc kubenswrapper[4704]: I0122 16:52:53.685682 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerStarted","Data":"a975340f59dcdf6686e7248c8922d9e110c0c249823a7a0f35da568eef0316ec"} Jan 22 16:52:54 crc kubenswrapper[4704]: I0122 16:52:54.696426 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerStarted","Data":"145bd0203c50b353e17627436ffe57403e24272c83d58a422f24df2b32cdbafd"} Jan 22 16:52:54 crc kubenswrapper[4704]: I0122 
16:52:54.696481 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerStarted","Data":"0e96c5b269ebf00d63a9c5928e623bd406f2d73f606ab848f61e69986da9d2b3"} Jan 22 16:52:56 crc kubenswrapper[4704]: I0122 16:52:56.716550 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerStarted","Data":"170065711be2555112f048dba1c4d9a5d83587ab8b1125ad13a2f25ed378fc89"} Jan 22 16:52:56 crc kubenswrapper[4704]: I0122 16:52:56.720167 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:52:56 crc kubenswrapper[4704]: I0122 16:52:56.750676 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.7331308869999997 podStartE2EDuration="5.750631308s" podCreationTimestamp="2026-01-22 16:52:51 +0000 UTC" firstStartedPulling="2026-01-22 16:52:52.571018986 +0000 UTC m=+1465.215565686" lastFinishedPulling="2026-01-22 16:52:55.588519387 +0000 UTC m=+1468.233066107" observedRunningTime="2026-01-22 16:52:56.735452726 +0000 UTC m=+1469.379999476" watchObservedRunningTime="2026-01-22 16:52:56.750631308 +0000 UTC m=+1469.395200549" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.387787 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.388385 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="f56533f4-37eb-4950-bc3c-71a536f51479" containerName="watcher-applier" containerID="cri-o://88a2bdc3ed9bbcf9acdfee37b2939486a9244275d9acd2973323817bdf59ced2" gracePeriod=30 Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.418467 4704 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.418735 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="5a7da9f5-25ae-4b0e-8ece-f532ed21281d" containerName="watcher-decision-engine" containerID="cri-o://56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24" gracePeriod=30 Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.428148 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.428360 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" containerName="memcached" containerID="cri-o://5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb" gracePeriod=30 Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.527842 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.528209 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-api" containerID="cri-o://148395564a14f81268688b367327d17a84a49e4293a9808b49e17360dc713418" gracePeriod=30 Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.528487 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-kuttl-api-log" containerID="cri-o://b81f06af084332242a4d7f33cee1ba0df46e60ddbeacc293c0407f06f08f214a" gracePeriod=30 Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.584662 4704 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-jmxpt"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.593740 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-jmxpt"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.645554 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6966404-cd7f-426d-ab2e-f7a6cf2c8959" path="/var/lib/kubelet/pods/e6966404-cd7f-426d-ab2e-f7a6cf2c8959/volumes" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.697697 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hrhmn"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.699024 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.702504 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.702504 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.710234 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hrhmn"] Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.740399 4704 generic.go:334] "Generic (PLEG): container finished" podID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerID="b81f06af084332242a4d7f33cee1ba0df46e60ddbeacc293c0407f06f08f214a" exitCode=143 Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.740447 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"aecead70-3acf-4c4c-99c0-e9ce3c8e867b","Type":"ContainerDied","Data":"b81f06af084332242a4d7f33cee1ba0df46e60ddbeacc293c0407f06f08f214a"} Jan 22 16:52:59 crc kubenswrapper[4704]: 
I0122 16:52:59.760041 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w845x\" (UniqueName: \"kubernetes.io/projected/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-kube-api-access-w845x\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.760254 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-config-data\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.760275 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-combined-ca-bundle\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.760383 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-cert-memcached-mtls\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.760441 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-scripts\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " 
pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.760568 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-credential-keys\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.760615 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-fernet-keys\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.862214 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w845x\" (UniqueName: \"kubernetes.io/projected/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-kube-api-access-w845x\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.862332 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-config-data\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.862362 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-combined-ca-bundle\") pod \"keystone-bootstrap-hrhmn\" (UID: 
\"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.862396 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-cert-memcached-mtls\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.862436 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-scripts\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.862485 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-credential-keys\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.862526 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-fernet-keys\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.870128 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-config-data\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " 
pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.870991 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-scripts\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.871239 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-credential-keys\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.873180 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-fernet-keys\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.873325 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-cert-memcached-mtls\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc kubenswrapper[4704]: I0122 16:52:59.881560 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-combined-ca-bundle\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:52:59 crc 
kubenswrapper[4704]: I0122 16:52:59.892503 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w845x\" (UniqueName: \"kubernetes.io/projected/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-kube-api-access-w845x\") pod \"keystone-bootstrap-hrhmn\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.035981 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.487160 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.679270 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmb9b\" (UniqueName: \"kubernetes.io/projected/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kube-api-access-dmb9b\") pod \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.679351 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kolla-config\") pod \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.679440 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-config-data\") pod \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.679499 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-combined-ca-bundle\") pod \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.679549 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-memcached-tls-certs\") pod \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\" (UID: \"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8\") " Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.680698 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" (UID: "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.681144 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-config-data" (OuterVolumeSpecName: "config-data") pod "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" (UID: "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.692056 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kube-api-access-dmb9b" (OuterVolumeSpecName: "kube-api-access-dmb9b") pod "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" (UID: "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8"). InnerVolumeSpecName "kube-api-access-dmb9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.769751 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hrhmn"] Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.775968 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" (UID: "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.776101 4704 generic.go:334] "Generic (PLEG): container finished" podID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerID="148395564a14f81268688b367327d17a84a49e4293a9808b49e17360dc713418" exitCode=0 Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.776155 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"aecead70-3acf-4c4c-99c0-e9ce3c8e867b","Type":"ContainerDied","Data":"148395564a14f81268688b367327d17a84a49e4293a9808b49e17360dc713418"} Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.776295 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" (UID: "9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.782333 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.782365 4704 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.782377 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmb9b\" (UniqueName: \"kubernetes.io/projected/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kube-api-access-dmb9b\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.782391 4704 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.782400 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.784640 4704 generic.go:334] "Generic (PLEG): container finished" podID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" containerID="5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb" exitCode=0 Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.785030 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8","Type":"ContainerDied","Data":"5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb"} Jan 22 16:53:00 crc 
kubenswrapper[4704]: I0122 16:53:00.785060 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8","Type":"ContainerDied","Data":"c0bc8bba451a33c1946d6842a0b2edb118885d7290a7fff46ce36aea5809568f"} Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.785081 4704 scope.go:117] "RemoveContainer" containerID="5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.785219 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.879223 4704 scope.go:117] "RemoveContainer" containerID="5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb" Jan 22 16:53:00 crc kubenswrapper[4704]: E0122 16:53:00.882591 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb\": container with ID starting with 5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb not found: ID does not exist" containerID="5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.882637 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb"} err="failed to get container status \"5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb\": rpc error: code = NotFound desc = could not find container \"5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb\": container with ID starting with 5837fb26a732e6a7b381ee0f733bb59b54f0741bb17ae35297a90a7cc902aafb not found: ID does not exist" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.887643 4704 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.896095 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.909761 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:53:00 crc kubenswrapper[4704]: E0122 16:53:00.910167 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" containerName="memcached" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.910193 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" containerName="memcached" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.910414 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" containerName="memcached" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.911124 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.924878 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.924978 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.925303 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-h4svk" Jan 22 16:53:00 crc kubenswrapper[4704]: I0122 16:53:00.965554 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.086913 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de39e0b8-3a6a-414d-a4db-e941e38230dd-config-data\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.086990 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39e0b8-3a6a-414d-a4db-e941e38230dd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.087042 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g64p\" (UniqueName: \"kubernetes.io/projected/de39e0b8-3a6a-414d-a4db-e941e38230dd-kube-api-access-4g64p\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 
16:53:01.087067 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39e0b8-3a6a-414d-a4db-e941e38230dd-kolla-config\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.087092 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39e0b8-3a6a-414d-a4db-e941e38230dd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.102563 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.189033 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de39e0b8-3a6a-414d-a4db-e941e38230dd-config-data\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.189099 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39e0b8-3a6a-414d-a4db-e941e38230dd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.189147 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g64p\" (UniqueName: \"kubernetes.io/projected/de39e0b8-3a6a-414d-a4db-e941e38230dd-kube-api-access-4g64p\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " 
pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.189170 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39e0b8-3a6a-414d-a4db-e941e38230dd-kolla-config\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.189196 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39e0b8-3a6a-414d-a4db-e941e38230dd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.192759 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39e0b8-3a6a-414d-a4db-e941e38230dd-memcached-tls-certs\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.193456 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de39e0b8-3a6a-414d-a4db-e941e38230dd-config-data\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.194007 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39e0b8-3a6a-414d-a4db-e941e38230dd-combined-ca-bundle\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.194193 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39e0b8-3a6a-414d-a4db-e941e38230dd-kolla-config\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.213296 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g64p\" (UniqueName: \"kubernetes.io/projected/de39e0b8-3a6a-414d-a4db-e941e38230dd-kube-api-access-4g64p\") pod \"memcached-0\" (UID: \"de39e0b8-3a6a-414d-a4db-e941e38230dd\") " pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.249889 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.290304 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-config-data\") pod \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.290374 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-combined-ca-bundle\") pod \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.290413 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-custom-prometheus-ca\") pod \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.290444 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-internal-tls-certs\") pod \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.290480 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np9cd\" (UniqueName: \"kubernetes.io/projected/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-kube-api-access-np9cd\") pod \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.290524 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-public-tls-certs\") pod \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.290607 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-logs\") pod \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\" (UID: \"aecead70-3acf-4c4c-99c0-e9ce3c8e867b\") " Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.292366 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-logs" (OuterVolumeSpecName: "logs") pod "aecead70-3acf-4c4c-99c0-e9ce3c8e867b" (UID: "aecead70-3acf-4c4c-99c0-e9ce3c8e867b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.313177 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-kube-api-access-np9cd" (OuterVolumeSpecName: "kube-api-access-np9cd") pod "aecead70-3acf-4c4c-99c0-e9ce3c8e867b" (UID: "aecead70-3acf-4c4c-99c0-e9ce3c8e867b"). InnerVolumeSpecName "kube-api-access-np9cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.319486 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "aecead70-3acf-4c4c-99c0-e9ce3c8e867b" (UID: "aecead70-3acf-4c4c-99c0-e9ce3c8e867b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.320212 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aecead70-3acf-4c4c-99c0-e9ce3c8e867b" (UID: "aecead70-3acf-4c4c-99c0-e9ce3c8e867b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.359296 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-config-data" (OuterVolumeSpecName: "config-data") pod "aecead70-3acf-4c4c-99c0-e9ce3c8e867b" (UID: "aecead70-3acf-4c4c-99c0-e9ce3c8e867b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.360622 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aecead70-3acf-4c4c-99c0-e9ce3c8e867b" (UID: "aecead70-3acf-4c4c-99c0-e9ce3c8e867b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.378199 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aecead70-3acf-4c4c-99c0-e9ce3c8e867b" (UID: "aecead70-3acf-4c4c-99c0-e9ce3c8e867b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.392497 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.392821 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.392948 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.393051 4704 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-internal-tls-certs\") on node 
\"crc\" DevicePath \"\"" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.393148 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np9cd\" (UniqueName: \"kubernetes.io/projected/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-kube-api-access-np9cd\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.393267 4704 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.393362 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aecead70-3acf-4c4c-99c0-e9ce3c8e867b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.647475 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8" path="/var/lib/kubelet/pods/9ad15c4a-ba0c-4ea1-804f-63eb7e4c96c8/volumes" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.800242 4704 generic.go:334] "Generic (PLEG): container finished" podID="f56533f4-37eb-4950-bc3c-71a536f51479" containerID="88a2bdc3ed9bbcf9acdfee37b2939486a9244275d9acd2973323817bdf59ced2" exitCode=0 Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.800585 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f56533f4-37eb-4950-bc3c-71a536f51479","Type":"ContainerDied","Data":"88a2bdc3ed9bbcf9acdfee37b2939486a9244275d9acd2973323817bdf59ced2"} Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.808082 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.813214 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.813215 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"aecead70-3acf-4c4c-99c0-e9ce3c8e867b","Type":"ContainerDied","Data":"e5e048b5aeaa8d66d4c6ea786d835462bbf6c542b6c878da7be394a5c90cec7a"} Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.813398 4704 scope.go:117] "RemoveContainer" containerID="148395564a14f81268688b367327d17a84a49e4293a9808b49e17360dc713418" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.826204 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" event={"ID":"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8","Type":"ContainerStarted","Data":"eec2e8ba32acceee20ec26951d704a25b0f7f58fdb2f5b10d7b4d32fa6e371c1"} Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.826245 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" event={"ID":"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8","Type":"ContainerStarted","Data":"6d24b03da5bc663210cb5825247551324bb00c5cf1482e3e691ab5bb4f18b7bc"} Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.854872 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" podStartSLOduration=2.8548547920000003 podStartE2EDuration="2.854854792s" podCreationTimestamp="2026-01-22 16:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:01.845445325 +0000 UTC m=+1474.489992025" watchObservedRunningTime="2026-01-22 16:53:01.854854792 +0000 UTC m=+1474.499401492" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.970104 4704 scope.go:117] "RemoveContainer" containerID="b81f06af084332242a4d7f33cee1ba0df46e60ddbeacc293c0407f06f08f214a" Jan 22 16:53:01 crc 
kubenswrapper[4704]: I0122 16:53:01.980639 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:01 crc kubenswrapper[4704]: I0122 16:53:01.997711 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.035250 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.056818 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:02 crc kubenswrapper[4704]: E0122 16:53:02.057208 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-kuttl-api-log" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.057221 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-kuttl-api-log" Jan 22 16:53:02 crc kubenswrapper[4704]: E0122 16:53:02.057236 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56533f4-37eb-4950-bc3c-71a536f51479" containerName="watcher-applier" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.057242 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56533f4-37eb-4950-bc3c-71a536f51479" containerName="watcher-applier" Jan 22 16:53:02 crc kubenswrapper[4704]: E0122 16:53:02.057260 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-api" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.057266 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-api" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.057412 4704 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-api" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.057426 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" containerName="watcher-kuttl-api-log" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.057444 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56533f4-37eb-4950-bc3c-71a536f51479" containerName="watcher-applier" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.058451 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.061328 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.061530 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.061893 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.089310 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.117867 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsk8n\" (UniqueName: \"kubernetes.io/projected/f56533f4-37eb-4950-bc3c-71a536f51479-kube-api-access-xsk8n\") pod \"f56533f4-37eb-4950-bc3c-71a536f51479\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.118051 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-config-data\") pod \"f56533f4-37eb-4950-bc3c-71a536f51479\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.118098 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f56533f4-37eb-4950-bc3c-71a536f51479-logs\") pod \"f56533f4-37eb-4950-bc3c-71a536f51479\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.118174 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-combined-ca-bundle\") pod \"f56533f4-37eb-4950-bc3c-71a536f51479\" (UID: \"f56533f4-37eb-4950-bc3c-71a536f51479\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.119175 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f56533f4-37eb-4950-bc3c-71a536f51479-logs" (OuterVolumeSpecName: "logs") pod "f56533f4-37eb-4950-bc3c-71a536f51479" (UID: "f56533f4-37eb-4950-bc3c-71a536f51479"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.121503 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56533f4-37eb-4950-bc3c-71a536f51479-kube-api-access-xsk8n" (OuterVolumeSpecName: "kube-api-access-xsk8n") pod "f56533f4-37eb-4950-bc3c-71a536f51479" (UID: "f56533f4-37eb-4950-bc3c-71a536f51479"). InnerVolumeSpecName "kube-api-access-xsk8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.143069 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f56533f4-37eb-4950-bc3c-71a536f51479" (UID: "f56533f4-37eb-4950-bc3c-71a536f51479"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.168604 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-config-data" (OuterVolumeSpecName: "config-data") pod "f56533f4-37eb-4950-bc3c-71a536f51479" (UID: "f56533f4-37eb-4950-bc3c-71a536f51479"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.183731 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220230 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220325 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220383 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220405 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpg9\" (UniqueName: \"kubernetes.io/projected/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-kube-api-access-qwpg9\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220426 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-internal-tls-certs\") pod 
\"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220494 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220508 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220543 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220636 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220650 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsk8n\" (UniqueName: \"kubernetes.io/projected/f56533f4-37eb-4950-bc3c-71a536f51479-kube-api-access-xsk8n\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220661 4704 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f56533f4-37eb-4950-bc3c-71a536f51479-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.220671 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f56533f4-37eb-4950-bc3c-71a536f51479-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.321617 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-combined-ca-bundle\") pod \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.321697 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-custom-prometheus-ca\") pod \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.321730 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-logs\") pod \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.321765 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t55zk\" (UniqueName: \"kubernetes.io/projected/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-kube-api-access-t55zk\") pod \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.321787 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-config-data\") pod \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\" (UID: \"5a7da9f5-25ae-4b0e-8ece-f532ed21281d\") " Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322006 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322029 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322045 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwpg9\" (UniqueName: \"kubernetes.io/projected/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-kube-api-access-qwpg9\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322066 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322107 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322108 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-logs" (OuterVolumeSpecName: "logs") pod "5a7da9f5-25ae-4b0e-8ece-f532ed21281d" (UID: "5a7da9f5-25ae-4b0e-8ece-f532ed21281d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322123 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322249 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322334 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.322471 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.323642 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.328429 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.328441 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.329418 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.329718 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.330252 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-kube-api-access-t55zk" (OuterVolumeSpecName: "kube-api-access-t55zk") pod "5a7da9f5-25ae-4b0e-8ece-f532ed21281d" (UID: "5a7da9f5-25ae-4b0e-8ece-f532ed21281d"). InnerVolumeSpecName "kube-api-access-t55zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.330406 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.330281 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.341851 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwpg9\" (UniqueName: \"kubernetes.io/projected/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-kube-api-access-qwpg9\") pod \"watcher-kuttl-api-0\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.342805 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5a7da9f5-25ae-4b0e-8ece-f532ed21281d" (UID: "5a7da9f5-25ae-4b0e-8ece-f532ed21281d"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.343030 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a7da9f5-25ae-4b0e-8ece-f532ed21281d" (UID: "5a7da9f5-25ae-4b0e-8ece-f532ed21281d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.368111 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-config-data" (OuterVolumeSpecName: "config-data") pod "5a7da9f5-25ae-4b0e-8ece-f532ed21281d" (UID: "5a7da9f5-25ae-4b0e-8ece-f532ed21281d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.381631 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.424332 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.424385 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.424398 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t55zk\" (UniqueName: \"kubernetes.io/projected/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-kube-api-access-t55zk\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:02 crc kubenswrapper[4704]: I0122 16:53:02.424442 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a7da9f5-25ae-4b0e-8ece-f532ed21281d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.011597 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"de39e0b8-3a6a-414d-a4db-e941e38230dd","Type":"ContainerStarted","Data":"df1e1550f950945a1fae11dcd8990fbea3466857dfd211091696cd62ffa5acd0"} Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.011972 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"de39e0b8-3a6a-414d-a4db-e941e38230dd","Type":"ContainerStarted","Data":"e878d7fe210f6feae70996650c9bbf4ba082ca94bd353bdef016e61ecbfcfb78"} Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.013010 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:03 crc kubenswrapper[4704]: 
I0122 16:53:03.016504 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.016506 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f56533f4-37eb-4950-bc3c-71a536f51479","Type":"ContainerDied","Data":"535aac1115f2d93b772112071e397cae842a6e5068d81b130401c9eaad3ebee7"} Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.016629 4704 scope.go:117] "RemoveContainer" containerID="88a2bdc3ed9bbcf9acdfee37b2939486a9244275d9acd2973323817bdf59ced2" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.023593 4704 generic.go:334] "Generic (PLEG): container finished" podID="5a7da9f5-25ae-4b0e-8ece-f532ed21281d" containerID="56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24" exitCode=0 Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.023677 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"5a7da9f5-25ae-4b0e-8ece-f532ed21281d","Type":"ContainerDied","Data":"56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24"} Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.023703 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"5a7da9f5-25ae-4b0e-8ece-f532ed21281d","Type":"ContainerDied","Data":"ffb79b2e2bf55b37212295b559e113e3af02e012d9b7281af7158c9ee07f96c8"} Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.023748 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.028601 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.039941 4704 scope.go:117] "RemoveContainer" containerID="56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.049782 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=3.049765037 podStartE2EDuration="3.049765037s" podCreationTimestamp="2026-01-22 16:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:03.04951332 +0000 UTC m=+1475.694060020" watchObservedRunningTime="2026-01-22 16:53:03.049765037 +0000 UTC m=+1475.694311727" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.110499 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.130321 4704 scope.go:117] "RemoveContainer" containerID="56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24" Jan 22 16:53:03 crc kubenswrapper[4704]: E0122 16:53:03.130953 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24\": container with ID starting with 56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24 not found: ID does not exist" containerID="56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.130994 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24"} err="failed to get container status \"56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24\": rpc error: code = NotFound desc = could not find container \"56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24\": container with ID starting with 56bc85887627997282e9fab8dfd97dcdcda7d62a9831bd2651665a72766dfc24 not found: ID does not exist" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.144950 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.153172 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.162923 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.174330 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: E0122 16:53:03.174720 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7da9f5-25ae-4b0e-8ece-f532ed21281d" containerName="watcher-decision-engine" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.174745 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7da9f5-25ae-4b0e-8ece-f532ed21281d" containerName="watcher-decision-engine" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.174942 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7da9f5-25ae-4b0e-8ece-f532ed21281d" containerName="watcher-decision-engine" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.175464 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.177054 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.184758 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.194472 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.213962 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.218412 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.218640 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.313776 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.314326 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.314346 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.314378 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.314412 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.314434 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcl6\" (UniqueName: \"kubernetes.io/projected/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-kube-api-access-2lcl6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416077 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416125 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416151 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416194 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416243 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416275 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40ba1d1-a055-487e-b779-171ce0f656a2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416299 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416337 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lcl6\" (UniqueName: \"kubernetes.io/projected/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-kube-api-access-2lcl6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416364 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416419 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.416450 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2dh\" (UniqueName: \"kubernetes.io/projected/f40ba1d1-a055-487e-b779-171ce0f656a2-kube-api-access-8k2dh\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.420117 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.422779 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.424300 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.424343 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.424390 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.438758 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lcl6\" (UniqueName: \"kubernetes.io/projected/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-kube-api-access-2lcl6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.518060 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2dh\" (UniqueName: \"kubernetes.io/projected/f40ba1d1-a055-487e-b779-171ce0f656a2-kube-api-access-8k2dh\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.518169 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.518227 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 
16:53:03.518254 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40ba1d1-a055-487e-b779-171ce0f656a2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.518294 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.519787 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40ba1d1-a055-487e-b779-171ce0f656a2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.521588 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.525124 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.525513 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.525649 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.539047 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2dh\" (UniqueName: \"kubernetes.io/projected/f40ba1d1-a055-487e-b779-171ce0f656a2-kube-api-access-8k2dh\") pod \"watcher-kuttl-applier-0\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.540436 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.646756 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7da9f5-25ae-4b0e-8ece-f532ed21281d" path="/var/lib/kubelet/pods/5a7da9f5-25ae-4b0e-8ece-f532ed21281d/volumes" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.648611 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecead70-3acf-4c4c-99c0-e9ce3c8e867b" path="/var/lib/kubelet/pods/aecead70-3acf-4c4c-99c0-e9ce3c8e867b/volumes" Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.649330 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56533f4-37eb-4950-bc3c-71a536f51479" path="/var/lib/kubelet/pods/f56533f4-37eb-4950-bc3c-71a536f51479/volumes" Jan 22 16:53:03 crc kubenswrapper[4704]: W0122 16:53:03.915039 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf40ba1d1_a055_487e_b779_171ce0f656a2.slice/crio-517774b35118359e325bc88493ae7a8620161d528df6e3f5b59e335d3d821d73 WatchSource:0}: Error finding container 517774b35118359e325bc88493ae7a8620161d528df6e3f5b59e335d3d821d73: Status 404 returned error can't find the container with id 517774b35118359e325bc88493ae7a8620161d528df6e3f5b59e335d3d821d73 Jan 22 16:53:03 crc kubenswrapper[4704]: I0122 16:53:03.917636 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:53:04 crc kubenswrapper[4704]: I0122 16:53:04.068940 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f40ba1d1-a055-487e-b779-171ce0f656a2","Type":"ContainerStarted","Data":"517774b35118359e325bc88493ae7a8620161d528df6e3f5b59e335d3d821d73"} Jan 22 16:53:04 crc kubenswrapper[4704]: I0122 16:53:04.074322 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8","Type":"ContainerStarted","Data":"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209"} Jan 22 16:53:04 crc kubenswrapper[4704]: I0122 16:53:04.074351 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8","Type":"ContainerStarted","Data":"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b"} Jan 22 16:53:04 crc kubenswrapper[4704]: I0122 16:53:04.074362 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8","Type":"ContainerStarted","Data":"d072166ef397d5504790f4cc7cf82122725f850dcdef5ec07dfab3eee1c3c9d3"} Jan 22 16:53:04 crc kubenswrapper[4704]: I0122 16:53:04.075577 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:04 crc kubenswrapper[4704]: I0122 16:53:04.083928 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:53:04 crc kubenswrapper[4704]: I0122 16:53:04.104646 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.104625347 podStartE2EDuration="3.104625347s" podCreationTimestamp="2026-01-22 16:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:04.104258566 +0000 UTC m=+1476.748805266" watchObservedRunningTime="2026-01-22 16:53:04.104625347 +0000 UTC m=+1476.749172047" Jan 22 16:53:05 crc kubenswrapper[4704]: I0122 16:53:05.109467 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3","Type":"ContainerStarted","Data":"36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3"} Jan 22 16:53:05 crc kubenswrapper[4704]: I0122 16:53:05.109593 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3","Type":"ContainerStarted","Data":"ba43a4c62da7ce51fa848c284b6647cd489da4fc62fefcc4c055b3ae21fcf9ae"} Jan 22 16:53:05 crc kubenswrapper[4704]: I0122 16:53:05.111910 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f40ba1d1-a055-487e-b779-171ce0f656a2","Type":"ContainerStarted","Data":"73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56"} Jan 22 16:53:05 crc kubenswrapper[4704]: I0122 16:53:05.114382 4704 generic.go:334] "Generic (PLEG): container finished" podID="cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" containerID="eec2e8ba32acceee20ec26951d704a25b0f7f58fdb2f5b10d7b4d32fa6e371c1" exitCode=0 Jan 22 16:53:05 crc kubenswrapper[4704]: I0122 16:53:05.114416 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" event={"ID":"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8","Type":"ContainerDied","Data":"eec2e8ba32acceee20ec26951d704a25b0f7f58fdb2f5b10d7b4d32fa6e371c1"} Jan 22 16:53:05 crc kubenswrapper[4704]: I0122 16:53:05.139706 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.139684551 podStartE2EDuration="2.139684551s" podCreationTimestamp="2026-01-22 16:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:05.124711595 +0000 UTC m=+1477.769258315" watchObservedRunningTime="2026-01-22 16:53:05.139684551 +0000 UTC m=+1477.784231251" Jan 22 16:53:05 crc 
kubenswrapper[4704]: I0122 16:53:05.170976 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.170955311 podStartE2EDuration="2.170955311s" podCreationTimestamp="2026-01-22 16:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:05.143891431 +0000 UTC m=+1477.788438151" watchObservedRunningTime="2026-01-22 16:53:05.170955311 +0000 UTC m=+1477.815502011" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.124614 4704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.464501 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.502109 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.690601 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-config-data\") pod \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.690883 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-credential-keys\") pod \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.691022 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-cert-memcached-mtls\") pod \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.691131 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-combined-ca-bundle\") pod \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.691214 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-fernet-keys\") pod \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.691301 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-scripts\") pod \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.691386 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w845x\" (UniqueName: \"kubernetes.io/projected/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-kube-api-access-w845x\") pod \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\" (UID: \"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8\") " Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.704930 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" (UID: "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.709453 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" (UID: "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.709526 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-kube-api-access-w845x" (OuterVolumeSpecName: "kube-api-access-w845x") pod "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" (UID: "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8"). InnerVolumeSpecName "kube-api-access-w845x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.714632 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-scripts" (OuterVolumeSpecName: "scripts") pod "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" (UID: "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.718351 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-config-data" (OuterVolumeSpecName: "config-data") pod "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" (UID: "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.731106 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" (UID: "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.767423 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" (UID: "cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.793596 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.793629 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w845x\" (UniqueName: \"kubernetes.io/projected/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-kube-api-access-w845x\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.793643 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.793652 4704 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.793662 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.793672 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:06 crc kubenswrapper[4704]: I0122 16:53:06.793680 4704 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:07 crc kubenswrapper[4704]: I0122 16:53:07.135852 4704 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" Jan 22 16:53:07 crc kubenswrapper[4704]: I0122 16:53:07.137929 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-hrhmn" event={"ID":"cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8","Type":"ContainerDied","Data":"6d24b03da5bc663210cb5825247551324bb00c5cf1482e3e691ab5bb4f18b7bc"} Jan 22 16:53:07 crc kubenswrapper[4704]: I0122 16:53:07.137967 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d24b03da5bc663210cb5825247551324bb00c5cf1482e3e691ab5bb4f18b7bc" Jan 22 16:53:07 crc kubenswrapper[4704]: I0122 16:53:07.382727 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:08 crc kubenswrapper[4704]: I0122 16:53:08.541354 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.251125 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.398084 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-7b5844cd49-x5nb5"] Jan 22 16:53:11 crc kubenswrapper[4704]: E0122 16:53:11.398504 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" containerName="keystone-bootstrap" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.398530 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" containerName="keystone-bootstrap" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.398817 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" containerName="keystone-bootstrap" Jan 22 16:53:11 crc 
kubenswrapper[4704]: I0122 16:53:11.399538 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.407357 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7b5844cd49-x5nb5"] Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.470924 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-scripts\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.471037 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-public-tls-certs\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.471064 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-fernet-keys\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.471108 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-credential-keys\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc 
kubenswrapper[4704]: I0122 16:53:11.471129 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-internal-tls-certs\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.471164 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-combined-ca-bundle\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.471188 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-cert-memcached-mtls\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.471208 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjz5\" (UniqueName: \"kubernetes.io/projected/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-kube-api-access-ljjz5\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.471224 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-config-data\") pod \"keystone-7b5844cd49-x5nb5\" (UID: 
\"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.572497 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-config-data\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573012 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-scripts\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573179 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-public-tls-certs\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573303 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-fernet-keys\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573454 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-credential-keys\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " 
pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573549 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-internal-tls-certs\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573772 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-combined-ca-bundle\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573910 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-cert-memcached-mtls\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.573998 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjz5\" (UniqueName: \"kubernetes.io/projected/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-kube-api-access-ljjz5\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.578706 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-config-data\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " 
pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.578706 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-scripts\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.579397 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-credential-keys\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.580280 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-internal-tls-certs\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.581143 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-public-tls-certs\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.581590 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-combined-ca-bundle\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 
crc kubenswrapper[4704]: I0122 16:53:11.582147 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-fernet-keys\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.583906 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-cert-memcached-mtls\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.597836 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjz5\" (UniqueName: \"kubernetes.io/projected/c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f-kube-api-access-ljjz5\") pod \"keystone-7b5844cd49-x5nb5\" (UID: \"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f\") " pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:11 crc kubenswrapper[4704]: I0122 16:53:11.715370 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:12 crc kubenswrapper[4704]: I0122 16:53:12.177652 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7b5844cd49-x5nb5"] Jan 22 16:53:12 crc kubenswrapper[4704]: I0122 16:53:12.382394 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:12 crc kubenswrapper[4704]: I0122 16:53:12.395469 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.190526 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" event={"ID":"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f","Type":"ContainerStarted","Data":"196701dd698ced38dea4213c51fa040641e4a46e763b49ee7abdf58ff7ad3d85"} Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.190561 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" event={"ID":"c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f","Type":"ContainerStarted","Data":"3e349817e68cdae5ef9201c398dfffcd7aea6589c9b78253251f4f8a9ee5089d"} Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.190582 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.206057 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.221559 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" podStartSLOduration=2.221532833 podStartE2EDuration="2.221532833s" podCreationTimestamp="2026-01-22 16:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:13.212178267 +0000 UTC m=+1485.856724967" watchObservedRunningTime="2026-01-22 16:53:13.221532833 +0000 UTC m=+1485.866079553" Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.338903 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.525826 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.541107 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.555674 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:13 crc kubenswrapper[4704]: I0122 16:53:13.566249 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:14 crc kubenswrapper[4704]: I0122 16:53:14.197261 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:14 crc kubenswrapper[4704]: I0122 16:53:14.262017 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:53:14 crc kubenswrapper[4704]: I0122 16:53:14.277141 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:53:15 crc kubenswrapper[4704]: I0122 16:53:15.205263 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" 
containerName="watcher-kuttl-api-log" containerID="cri-o://a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b" gracePeriod=30 Jan 22 16:53:15 crc kubenswrapper[4704]: I0122 16:53:15.205344 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerName="watcher-api" containerID="cri-o://8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209" gracePeriod=30 Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.048883 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.148856 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-logs\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.148983 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-combined-ca-bundle\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.149008 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-public-tls-certs\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.149765 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-logs" (OuterVolumeSpecName: "logs") pod 
"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.149939 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-cert-memcached-mtls\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.149968 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-internal-tls-certs\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.150003 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-custom-prometheus-ca\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.150067 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwpg9\" (UniqueName: \"kubernetes.io/projected/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-kube-api-access-qwpg9\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.150099 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-config-data\") pod \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\" (UID: \"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8\") " Jan 22 16:53:16 crc 
kubenswrapper[4704]: I0122 16:53:16.150428 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.158923 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-kube-api-access-qwpg9" (OuterVolumeSpecName: "kube-api-access-qwpg9") pod "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "kube-api-access-qwpg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.182006 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.190129 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.198803 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.199473 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.200519 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-config-data" (OuterVolumeSpecName: "config-data") pod "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.217418 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" (UID: "9da82de5-5c6f-4658-865b-e5a8f1b7a0c8"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.220409 4704 generic.go:334] "Generic (PLEG): container finished" podID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerID="8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209" exitCode=0 Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.220437 4704 generic.go:334] "Generic (PLEG): container finished" podID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerID="a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b" exitCode=143 Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.220566 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.220552 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8","Type":"ContainerDied","Data":"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209"} Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.220715 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8","Type":"ContainerDied","Data":"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b"} Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.220734 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9da82de5-5c6f-4658-865b-e5a8f1b7a0c8","Type":"ContainerDied","Data":"d072166ef397d5504790f4cc7cf82122725f850dcdef5ec07dfab3eee1c3c9d3"} Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.220753 4704 scope.go:117] "RemoveContainer" containerID="8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.251952 4704 scope.go:117] "RemoveContainer" 
containerID="a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.263786 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.263837 4704 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.263849 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.263857 4704 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.263868 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.263899 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwpg9\" (UniqueName: \"kubernetes.io/projected/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-kube-api-access-qwpg9\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.263909 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 
16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.274278 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.277501 4704 scope.go:117] "RemoveContainer" containerID="8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209" Jan 22 16:53:16 crc kubenswrapper[4704]: E0122 16:53:16.278920 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209\": container with ID starting with 8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209 not found: ID does not exist" containerID="8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.278968 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209"} err="failed to get container status \"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209\": rpc error: code = NotFound desc = could not find container \"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209\": container with ID starting with 8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209 not found: ID does not exist" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.278994 4704 scope.go:117] "RemoveContainer" containerID="a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b" Jan 22 16:53:16 crc kubenswrapper[4704]: E0122 16:53:16.279338 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b\": container with ID starting with a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b not found: ID does not exist" 
containerID="a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.279383 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b"} err="failed to get container status \"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b\": rpc error: code = NotFound desc = could not find container \"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b\": container with ID starting with a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b not found: ID does not exist" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.279404 4704 scope.go:117] "RemoveContainer" containerID="8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.279675 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209"} err="failed to get container status \"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209\": rpc error: code = NotFound desc = could not find container \"8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209\": container with ID starting with 8c7e8f05486fafb1cabc9ab0b33cb5314ed2d2df75f1d7b7a720c65f37e51209 not found: ID does not exist" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.279781 4704 scope.go:117] "RemoveContainer" containerID="a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.280476 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b"} err="failed to get container status \"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b\": rpc error: code = NotFound desc = could 
not find container \"a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b\": container with ID starting with a73ee6f178ea57cd5f67cc0c598228e5888a572f679993d28d78d9db08ec499b not found: ID does not exist" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.282541 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.334922 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:16 crc kubenswrapper[4704]: E0122 16:53:16.335247 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerName="watcher-api" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.335262 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerName="watcher-api" Jan 22 16:53:16 crc kubenswrapper[4704]: E0122 16:53:16.335294 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerName="watcher-kuttl-api-log" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.335300 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerName="watcher-kuttl-api-log" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.335459 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerName="watcher-api" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.335477 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" containerName="watcher-kuttl-api-log" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.336283 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.338713 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.364122 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.466449 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.466497 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.466565 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.466774 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.466919 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8tw4\" (UniqueName: \"kubernetes.io/projected/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-kube-api-access-t8tw4\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.467014 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.569086 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.569146 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.569167 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc 
kubenswrapper[4704]: I0122 16:53:16.569232 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.569272 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.569323 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8tw4\" (UniqueName: \"kubernetes.io/projected/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-kube-api-access-t8tw4\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.569600 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.573150 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.574228 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.574832 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.576120 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.585575 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8tw4\" (UniqueName: \"kubernetes.io/projected/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-kube-api-access-t8tw4\") pod \"watcher-kuttl-api-0\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:16 crc kubenswrapper[4704]: I0122 16:53:16.691845 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:17 crc kubenswrapper[4704]: I0122 16:53:17.145654 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:53:17 crc kubenswrapper[4704]: W0122 16:53:17.154820 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f34bccd_0df7_49d2_a8ff_7a4e8b345ef4.slice/crio-4e61909ee1f32c6b77b152280c100e2f81dcb5e3872a1b2d0462b23d28f668c0 WatchSource:0}: Error finding container 4e61909ee1f32c6b77b152280c100e2f81dcb5e3872a1b2d0462b23d28f668c0: Status 404 returned error can't find the container with id 4e61909ee1f32c6b77b152280c100e2f81dcb5e3872a1b2d0462b23d28f668c0 Jan 22 16:53:17 crc kubenswrapper[4704]: I0122 16:53:17.229519 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4","Type":"ContainerStarted","Data":"4e61909ee1f32c6b77b152280c100e2f81dcb5e3872a1b2d0462b23d28f668c0"} Jan 22 16:53:17 crc kubenswrapper[4704]: I0122 16:53:17.651208 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da82de5-5c6f-4658-865b-e5a8f1b7a0c8" path="/var/lib/kubelet/pods/9da82de5-5c6f-4658-865b-e5a8f1b7a0c8/volumes" Jan 22 16:53:18 crc kubenswrapper[4704]: I0122 16:53:18.239256 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4","Type":"ContainerStarted","Data":"6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf"} Jan 22 16:53:18 crc kubenswrapper[4704]: I0122 16:53:18.239675 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4","Type":"ContainerStarted","Data":"31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e"} Jan 22 
16:53:18 crc kubenswrapper[4704]: I0122 16:53:18.239697 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:18 crc kubenswrapper[4704]: I0122 16:53:18.260545 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.260524112 podStartE2EDuration="2.260524112s" podCreationTimestamp="2026-01-22 16:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:53:18.258164345 +0000 UTC m=+1490.902711045" watchObservedRunningTime="2026-01-22 16:53:18.260524112 +0000 UTC m=+1490.905070812" Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.086494 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.086547 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.086584 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.087279 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435"} 
pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.087328 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" gracePeriod=600 Jan 22 16:53:19 crc kubenswrapper[4704]: E0122 16:53:19.233784 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.250009 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" exitCode=0 Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.251021 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435"} Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.251055 4704 scope.go:117] "RemoveContainer" containerID="33c05c7b04e52a99d7618873c0e8cfbae6126223bfd8e14eabf1b1f805e4a907" Jan 22 16:53:19 crc kubenswrapper[4704]: I0122 16:53:19.251359 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 
22 16:53:19 crc kubenswrapper[4704]: E0122 16:53:19.251664 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:53:20 crc kubenswrapper[4704]: I0122 16:53:20.389831 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:21 crc kubenswrapper[4704]: I0122 16:53:21.692280 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:22 crc kubenswrapper[4704]: I0122 16:53:22.207421 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:26 crc kubenswrapper[4704]: I0122 16:53:26.692902 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:26 crc kubenswrapper[4704]: I0122 16:53:26.698703 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:27 crc kubenswrapper[4704]: I0122 16:53:27.328248 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:53:34 crc kubenswrapper[4704]: I0122 16:53:34.637125 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:53:34 crc kubenswrapper[4704]: E0122 16:53:34.637907 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:53:35 crc kubenswrapper[4704]: I0122 16:53:35.679677 4704 scope.go:117] "RemoveContainer" containerID="dc34df274eea3d1e12e1ea912600a7999733a15df561ad7567d238f7251337f9" Jan 22 16:53:43 crc kubenswrapper[4704]: I0122 16:53:43.371490 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-7b5844cd49-x5nb5" Jan 22 16:53:43 crc kubenswrapper[4704]: I0122 16:53:43.437402 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7747c9fb6-l9n4v"] Jan 22 16:53:43 crc kubenswrapper[4704]: I0122 16:53:43.437623 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" podUID="2c642fb5-a73d-47db-8dc4-dcb7c13c876d" containerName="keystone-api" containerID="cri-o://057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde" gracePeriod=30 Jan 22 16:53:46 crc kubenswrapper[4704]: I0122 16:53:46.985458 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.113652 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-internal-tls-certs\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.113730 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-combined-ca-bundle\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.113816 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-scripts\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.114527 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-config-data\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.114606 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-public-tls-certs\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.114650 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-credential-keys\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.114673 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g9bp\" (UniqueName: \"kubernetes.io/projected/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-kube-api-access-2g9bp\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.114713 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-fernet-keys\") pod \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\" (UID: \"2c642fb5-a73d-47db-8dc4-dcb7c13c876d\") " Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.119429 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-scripts" (OuterVolumeSpecName: "scripts") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.120197 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.127997 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.128757 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-kube-api-access-2g9bp" (OuterVolumeSpecName: "kube-api-access-2g9bp") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "kube-api-access-2g9bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.158201 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-config-data" (OuterVolumeSpecName: "config-data") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.161133 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.163182 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.169907 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c642fb5-a73d-47db-8dc4-dcb7c13c876d" (UID: "2c642fb5-a73d-47db-8dc4-dcb7c13c876d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215459 4704 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215492 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215501 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215509 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-config-data\") on node \"crc\" DevicePath \"\"" 
Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215520 4704 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215529 4704 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215538 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g9bp\" (UniqueName: \"kubernetes.io/projected/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-kube-api-access-2g9bp\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.215548 4704 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c642fb5-a73d-47db-8dc4-dcb7c13c876d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.487038 4704 generic.go:334] "Generic (PLEG): container finished" podID="2c642fb5-a73d-47db-8dc4-dcb7c13c876d" containerID="057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde" exitCode=0 Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.487104 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.487104 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" event={"ID":"2c642fb5-a73d-47db-8dc4-dcb7c13c876d","Type":"ContainerDied","Data":"057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde"} Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.487347 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7747c9fb6-l9n4v" event={"ID":"2c642fb5-a73d-47db-8dc4-dcb7c13c876d","Type":"ContainerDied","Data":"877ba6166b298fb5a28ed4d7eaea6a0199af922492513723c3a6d864d4653709"} Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.487412 4704 scope.go:117] "RemoveContainer" containerID="057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.516716 4704 scope.go:117] "RemoveContainer" containerID="057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde" Jan 22 16:53:47 crc kubenswrapper[4704]: E0122 16:53:47.517321 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde\": container with ID starting with 057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde not found: ID does not exist" containerID="057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.517377 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde"} err="failed to get container status \"057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde\": rpc error: code = NotFound desc = could not find container 
\"057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde\": container with ID starting with 057a6aa1ba004e014bb439a395175b858d27845552c30cb1829192bbe9fb3cde not found: ID does not exist" Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.525645 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7747c9fb6-l9n4v"] Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.532911 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-7747c9fb6-l9n4v"] Jan 22 16:53:47 crc kubenswrapper[4704]: I0122 16:53:47.645664 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c642fb5-a73d-47db-8dc4-dcb7c13c876d" path="/var/lib/kubelet/pods/2c642fb5-a73d-47db-8dc4-dcb7c13c876d/volumes" Jan 22 16:53:49 crc kubenswrapper[4704]: I0122 16:53:49.634257 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:53:49 crc kubenswrapper[4704]: E0122 16:53:49.635127 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:53:50 crc kubenswrapper[4704]: I0122 16:53:50.949865 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:53:50 crc kubenswrapper[4704]: I0122 16:53:50.951004 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-central-agent" containerID="cri-o://a975340f59dcdf6686e7248c8922d9e110c0c249823a7a0f35da568eef0316ec" gracePeriod=30 Jan 22 
16:53:50 crc kubenswrapper[4704]: I0122 16:53:50.952835 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="proxy-httpd" containerID="cri-o://170065711be2555112f048dba1c4d9a5d83587ab8b1125ad13a2f25ed378fc89" gracePeriod=30 Jan 22 16:53:50 crc kubenswrapper[4704]: I0122 16:53:50.952872 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-notification-agent" containerID="cri-o://0e96c5b269ebf00d63a9c5928e623bd406f2d73f606ab848f61e69986da9d2b3" gracePeriod=30 Jan 22 16:53:50 crc kubenswrapper[4704]: I0122 16:53:50.953027 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="sg-core" containerID="cri-o://145bd0203c50b353e17627436ffe57403e24272c83d58a422f24df2b32cdbafd" gracePeriod=30 Jan 22 16:53:51 crc kubenswrapper[4704]: E0122 16:53:51.453752 4704 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc68aa7a9_44fe_4b4e_9d75_ed820d48f4c6.slice/crio-conmon-a975340f59dcdf6686e7248c8922d9e110c0c249823a7a0f35da568eef0316ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc68aa7a9_44fe_4b4e_9d75_ed820d48f4c6.slice/crio-a975340f59dcdf6686e7248c8922d9e110c0c249823a7a0f35da568eef0316ec.scope\": RecentStats: unable to find data in memory cache]" Jan 22 16:53:51 crc kubenswrapper[4704]: I0122 16:53:51.517343 4704 generic.go:334] "Generic (PLEG): container finished" podID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerID="170065711be2555112f048dba1c4d9a5d83587ab8b1125ad13a2f25ed378fc89" exitCode=0 Jan 22 
16:53:51 crc kubenswrapper[4704]: I0122 16:53:51.517396 4704 generic.go:334] "Generic (PLEG): container finished" podID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerID="145bd0203c50b353e17627436ffe57403e24272c83d58a422f24df2b32cdbafd" exitCode=2 Jan 22 16:53:51 crc kubenswrapper[4704]: I0122 16:53:51.517406 4704 generic.go:334] "Generic (PLEG): container finished" podID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerID="a975340f59dcdf6686e7248c8922d9e110c0c249823a7a0f35da568eef0316ec" exitCode=0 Jan 22 16:53:51 crc kubenswrapper[4704]: I0122 16:53:51.517409 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerDied","Data":"170065711be2555112f048dba1c4d9a5d83587ab8b1125ad13a2f25ed378fc89"} Jan 22 16:53:51 crc kubenswrapper[4704]: I0122 16:53:51.517442 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerDied","Data":"145bd0203c50b353e17627436ffe57403e24272c83d58a422f24df2b32cdbafd"} Jan 22 16:53:51 crc kubenswrapper[4704]: I0122 16:53:51.517454 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerDied","Data":"a975340f59dcdf6686e7248c8922d9e110c0c249823a7a0f35da568eef0316ec"} Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.057969 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.154:3000/\": dial tcp 10.217.0.154:3000: connect: connection refused" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.530515 4704 generic.go:334] "Generic (PLEG): container finished" podID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" 
containerID="0e96c5b269ebf00d63a9c5928e623bd406f2d73f606ab848f61e69986da9d2b3" exitCode=0 Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.530568 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerDied","Data":"0e96c5b269ebf00d63a9c5928e623bd406f2d73f606ab848f61e69986da9d2b3"} Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.744626 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819394 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-882hv\" (UniqueName: \"kubernetes.io/projected/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-kube-api-access-882hv\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819450 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-log-httpd\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819516 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-config-data\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819611 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-combined-ca-bundle\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " 
Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819635 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-sg-core-conf-yaml\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819692 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-ceilometer-tls-certs\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819713 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-scripts\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.819738 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-run-httpd\") pod \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\" (UID: \"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6\") " Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.820375 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.823144 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.833883 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-kube-api-access-882hv" (OuterVolumeSpecName: "kube-api-access-882hv") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "kube-api-access-882hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.839187 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-scripts" (OuterVolumeSpecName: "scripts") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.868894 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.902847 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.909404 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.921254 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.921290 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.921300 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.921310 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-scripts\") on 
node \"crc\" DevicePath \"\"" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.921320 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.921330 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-882hv\" (UniqueName: \"kubernetes.io/projected/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-kube-api-access-882hv\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.921340 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:52 crc kubenswrapper[4704]: I0122 16:53:52.955050 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-config-data" (OuterVolumeSpecName: "config-data") pod "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" (UID: "c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.022513 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.539884 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6","Type":"ContainerDied","Data":"b401c0d0bad888225303013abf62447a1e6895c013c457ddc3c656dd7512266b"} Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.540228 4704 scope.go:117] "RemoveContainer" containerID="170065711be2555112f048dba1c4d9a5d83587ab8b1125ad13a2f25ed378fc89" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.540353 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.577045 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.582032 4704 scope.go:117] "RemoveContainer" containerID="145bd0203c50b353e17627436ffe57403e24272c83d58a422f24df2b32cdbafd" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.583495 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.604035 4704 scope.go:117] "RemoveContainer" containerID="0e96c5b269ebf00d63a9c5928e623bd406f2d73f606ab848f61e69986da9d2b3" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.619683 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:53:53 crc kubenswrapper[4704]: E0122 16:53:53.620188 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" 
containerName="sg-core" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.620254 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="sg-core" Jan 22 16:53:53 crc kubenswrapper[4704]: E0122 16:53:53.620311 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="proxy-httpd" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.620395 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="proxy-httpd" Jan 22 16:53:53 crc kubenswrapper[4704]: E0122 16:53:53.620458 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-central-agent" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.620518 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-central-agent" Jan 22 16:53:53 crc kubenswrapper[4704]: E0122 16:53:53.620571 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-notification-agent" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.620619 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-notification-agent" Jan 22 16:53:53 crc kubenswrapper[4704]: E0122 16:53:53.620670 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c642fb5-a73d-47db-8dc4-dcb7c13c876d" containerName="keystone-api" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.620722 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c642fb5-a73d-47db-8dc4-dcb7c13c876d" containerName="keystone-api" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.620928 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c642fb5-a73d-47db-8dc4-dcb7c13c876d" 
containerName="keystone-api" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.621000 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="sg-core" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.621059 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="proxy-httpd" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.621127 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-central-agent" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.621182 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" containerName="ceilometer-notification-agent" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.622725 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.628165 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.629212 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.629388 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.632881 4704 scope.go:117] "RemoveContainer" containerID="a975340f59dcdf6686e7248c8922d9e110c0c249823a7a0f35da568eef0316ec" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.647134 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6" path="/var/lib/kubelet/pods/c68aa7a9-44fe-4b4e-9d75-ed820d48f4c6/volumes" 
Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.647801 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743506 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcg6\" (UniqueName: \"kubernetes.io/projected/c75c5f08-663e-45b0-a9f3-4e43a1893fde-kube-api-access-wzcg6\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743573 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-log-httpd\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743599 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743634 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-run-httpd\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743652 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-scripts\") pod \"ceilometer-0\" 
(UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743671 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-config-data\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743714 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.743742 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.845727 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-log-httpd\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.845793 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 
16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.845863 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-run-httpd\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.845890 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-scripts\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.845912 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-config-data\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.845963 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.846002 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.846059 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcg6\" (UniqueName: 
\"kubernetes.io/projected/c75c5f08-663e-45b0-a9f3-4e43a1893fde-kube-api-access-wzcg6\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.850476 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-log-httpd\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.850535 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-run-httpd\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.853361 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.861622 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.866589 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-config-data\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 
16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.875473 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.877557 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcg6\" (UniqueName: \"kubernetes.io/projected/c75c5f08-663e-45b0-a9f3-4e43a1893fde-kube-api-access-wzcg6\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:53 crc kubenswrapper[4704]: I0122 16:53:53.879978 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-scripts\") pod \"ceilometer-0\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:54 crc kubenswrapper[4704]: I0122 16:53:54.004227 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:54 crc kubenswrapper[4704]: I0122 16:53:54.631759 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:53:55 crc kubenswrapper[4704]: I0122 16:53:55.558879 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerStarted","Data":"9f28656b1463a710817e11346a593846b4b643acdca54aa3bfde8a4cba29009a"} Jan 22 16:53:55 crc kubenswrapper[4704]: I0122 16:53:55.559250 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerStarted","Data":"65ee2ec19d0d262e6cd249912382bf7330918e75808b8fd949c2009f40ffbb56"} Jan 22 16:53:56 crc kubenswrapper[4704]: I0122 16:53:56.568177 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerStarted","Data":"3c4de701a0baa1111fd36d0989bda4ccd041b8ff70804e682f4c5be1b3b9c83d"} Jan 22 16:53:57 crc kubenswrapper[4704]: I0122 16:53:57.589022 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerStarted","Data":"e16954d834f6643b39ce1da201ffcdfd2715d34294dc39d616e74a897c5f1d3d"} Jan 22 16:53:59 crc kubenswrapper[4704]: I0122 16:53:59.609869 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerStarted","Data":"411e33787a0bcb79169e5cc0ea57f3f700d88e327b2c46b8863070b13928892b"} Jan 22 16:53:59 crc kubenswrapper[4704]: I0122 16:53:59.610403 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:53:59 crc kubenswrapper[4704]: 
I0122 16:53:59.633221 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.683917938 podStartE2EDuration="6.633204546s" podCreationTimestamp="2026-01-22 16:53:53 +0000 UTC" firstStartedPulling="2026-01-22 16:53:54.630757527 +0000 UTC m=+1527.275304227" lastFinishedPulling="2026-01-22 16:53:58.580044135 +0000 UTC m=+1531.224590835" observedRunningTime="2026-01-22 16:53:59.627220176 +0000 UTC m=+1532.271766876" watchObservedRunningTime="2026-01-22 16:53:59.633204546 +0000 UTC m=+1532.277751246" Jan 22 16:54:03 crc kubenswrapper[4704]: I0122 16:54:03.637742 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:54:03 crc kubenswrapper[4704]: E0122 16:54:03.638197 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:54:14 crc kubenswrapper[4704]: I0122 16:54:14.635032 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:54:14 crc kubenswrapper[4704]: E0122 16:54:14.635998 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:54:24 crc kubenswrapper[4704]: I0122 16:54:24.019740 4704 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:25 crc kubenswrapper[4704]: I0122 16:54:25.634280 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:54:25 crc kubenswrapper[4704]: E0122 16:54:25.634517 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:54:26 crc kubenswrapper[4704]: I0122 16:54:26.917172 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w"] Jan 22 16:54:26 crc kubenswrapper[4704]: I0122 16:54:26.924310 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-f2t5w"] Jan 22 16:54:26 crc kubenswrapper[4704]: I0122 16:54:26.971418 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:54:26 crc kubenswrapper[4704]: I0122 16:54:26.971705 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" containerName="watcher-decision-engine" containerID="cri-o://36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3" gracePeriod=30 Jan 22 16:54:26 crc kubenswrapper[4704]: I0122 16:54:26.980482 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchercf37-account-delete-9b88p"] Jan 22 16:54:26 crc kubenswrapper[4704]: I0122 16:54:26.981467 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:26 crc kubenswrapper[4704]: I0122 16:54:26.999596 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchercf37-account-delete-9b88p"] Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.062085 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.062389 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-kuttl-api-log" containerID="cri-o://31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e" gracePeriod=30 Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.062842 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-api" containerID="cri-o://6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf" gracePeriod=30 Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.070400 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.070602 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="f40ba1d1-a055-487e-b779-171ce0f656a2" containerName="watcher-applier" containerID="cri-o://73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" gracePeriod=30 Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.134446 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9hx\" (UniqueName: \"kubernetes.io/projected/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-kube-api-access-5q9hx\") 
pod \"watchercf37-account-delete-9b88p\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.134511 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-operator-scripts\") pod \"watchercf37-account-delete-9b88p\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.235584 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9hx\" (UniqueName: \"kubernetes.io/projected/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-kube-api-access-5q9hx\") pod \"watchercf37-account-delete-9b88p\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.235656 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-operator-scripts\") pod \"watchercf37-account-delete-9b88p\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.236324 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-operator-scripts\") pod \"watchercf37-account-delete-9b88p\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.262488 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9hx\" (UniqueName: 
\"kubernetes.io/projected/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-kube-api-access-5q9hx\") pod \"watchercf37-account-delete-9b88p\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.302492 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.659000 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0b1107-1d1c-4907-b3ab-e4121d83335f" path="/var/lib/kubelet/pods/5b0b1107-1d1c-4907-b3ab-e4121d83335f/volumes" Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.827170 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchercf37-account-delete-9b88p"] Jan 22 16:54:27 crc kubenswrapper[4704]: W0122 16:54:27.837935 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9bb3f29_5eb9_483a_b3e0_bf51383ffd0d.slice/crio-9cd0e343c323df3e849f26ecc3b606cc1711769f27d2414e008739d0d6a6efe4 WatchSource:0}: Error finding container 9cd0e343c323df3e849f26ecc3b606cc1711769f27d2414e008739d0d6a6efe4: Status 404 returned error can't find the container with id 9cd0e343c323df3e849f26ecc3b606cc1711769f27d2414e008739d0d6a6efe4 Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.885145 4704 generic.go:334] "Generic (PLEG): container finished" podID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerID="31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e" exitCode=143 Jan 22 16:54:27 crc kubenswrapper[4704]: I0122 16:54:27.885241 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4","Type":"ContainerDied","Data":"31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e"} Jan 22 16:54:27 
crc kubenswrapper[4704]: I0122 16:54:27.891005 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" event={"ID":"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d","Type":"ContainerStarted","Data":"9cd0e343c323df3e849f26ecc3b606cc1711769f27d2414e008739d0d6a6efe4"} Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.397224 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:54:28 crc kubenswrapper[4704]: E0122 16:54:28.545953 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:54:28 crc kubenswrapper[4704]: E0122 16:54:28.547413 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:54:28 crc kubenswrapper[4704]: E0122 16:54:28.561866 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 16:54:28 crc kubenswrapper[4704]: E0122 16:54:28.561932 4704 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
podUID="f40ba1d1-a055-487e-b779-171ce0f656a2" containerName="watcher-applier" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.562683 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-logs\") pod \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.562743 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-custom-prometheus-ca\") pod \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.562868 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-cert-memcached-mtls\") pod \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.562950 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8tw4\" (UniqueName: \"kubernetes.io/projected/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-kube-api-access-t8tw4\") pod \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.562998 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-config-data\") pod \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.563043 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-combined-ca-bundle\") pod \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\" (UID: \"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4\") " Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.563435 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-logs" (OuterVolumeSpecName: "logs") pod "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" (UID: "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.591648 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-kube-api-access-t8tw4" (OuterVolumeSpecName: "kube-api-access-t8tw4") pod "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" (UID: "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4"). InnerVolumeSpecName "kube-api-access-t8tw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.604014 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" (UID: "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.615393 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" (UID: "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.627938 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-config-data" (OuterVolumeSpecName: "config-data") pod "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" (UID: "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.659326 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" (UID: "1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.664818 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.664868 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.664885 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.664898 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8tw4\" (UniqueName: \"kubernetes.io/projected/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-kube-api-access-t8tw4\") on node \"crc\" 
DevicePath \"\"" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.664910 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.664922 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.904378 4704 generic.go:334] "Generic (PLEG): container finished" podID="c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d" containerID="b8cfc689aadb7eb7dd0b663b4cc8e963f6354138302d758c5c396ca6e30a0497" exitCode=0 Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.904459 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" event={"ID":"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d","Type":"ContainerDied","Data":"b8cfc689aadb7eb7dd0b663b4cc8e963f6354138302d758c5c396ca6e30a0497"} Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.907026 4704 generic.go:334] "Generic (PLEG): container finished" podID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerID="6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf" exitCode=0 Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.907059 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4","Type":"ContainerDied","Data":"6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf"} Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.907079 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4","Type":"ContainerDied","Data":"4e61909ee1f32c6b77b152280c100e2f81dcb5e3872a1b2d0462b23d28f668c0"} Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.907101 4704 scope.go:117] "RemoveContainer" containerID="6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.907204 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.944438 4704 scope.go:117] "RemoveContainer" containerID="31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.956100 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.963726 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.967197 4704 scope.go:117] "RemoveContainer" containerID="6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf" Jan 22 16:54:28 crc kubenswrapper[4704]: E0122 16:54:28.969265 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf\": container with ID starting with 6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf not found: ID does not exist" containerID="6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.969302 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf"} err="failed to get container status 
\"6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf\": rpc error: code = NotFound desc = could not find container \"6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf\": container with ID starting with 6b4d859ff6f1ce6403be0da6dd5d1a7813ba200ec62379d2e83c19153301ebdf not found: ID does not exist" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.969327 4704 scope.go:117] "RemoveContainer" containerID="31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e" Jan 22 16:54:28 crc kubenswrapper[4704]: E0122 16:54:28.971271 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e\": container with ID starting with 31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e not found: ID does not exist" containerID="31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e" Jan 22 16:54:28 crc kubenswrapper[4704]: I0122 16:54:28.971313 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e"} err="failed to get container status \"31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e\": rpc error: code = NotFound desc = could not find container \"31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e\": container with ID starting with 31fd148565379d553e61d0def99269466c2a5504ee84810d9ae26c529436624e not found: ID does not exist" Jan 22 16:54:29 crc kubenswrapper[4704]: I0122 16:54:29.644844 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" path="/var/lib/kubelet/pods/1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4/volumes" Jan 22 16:54:29 crc kubenswrapper[4704]: I0122 16:54:29.747649 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:54:29 crc 
kubenswrapper[4704]: I0122 16:54:29.748478 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-central-agent" containerID="cri-o://9f28656b1463a710817e11346a593846b4b643acdca54aa3bfde8a4cba29009a" gracePeriod=30 Jan 22 16:54:29 crc kubenswrapper[4704]: I0122 16:54:29.748691 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="proxy-httpd" containerID="cri-o://411e33787a0bcb79169e5cc0ea57f3f700d88e327b2c46b8863070b13928892b" gracePeriod=30 Jan 22 16:54:29 crc kubenswrapper[4704]: I0122 16:54:29.748752 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="sg-core" containerID="cri-o://e16954d834f6643b39ce1da201ffcdfd2715d34294dc39d616e74a897c5f1d3d" gracePeriod=30 Jan 22 16:54:29 crc kubenswrapper[4704]: I0122 16:54:29.748707 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-notification-agent" containerID="cri-o://3c4de701a0baa1111fd36d0989bda4ccd041b8ff70804e682f4c5be1b3b9c83d" gracePeriod=30 Jan 22 16:54:29 crc kubenswrapper[4704]: I0122 16:54:29.918925 4704 generic.go:334] "Generic (PLEG): container finished" podID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerID="e16954d834f6643b39ce1da201ffcdfd2715d34294dc39d616e74a897c5f1d3d" exitCode=2 Jan 22 16:54:29 crc kubenswrapper[4704]: I0122 16:54:29.918993 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerDied","Data":"e16954d834f6643b39ce1da201ffcdfd2715d34294dc39d616e74a897c5f1d3d"} Jan 22 
16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.338392 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.498278 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q9hx\" (UniqueName: \"kubernetes.io/projected/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-kube-api-access-5q9hx\") pod \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.498415 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-operator-scripts\") pod \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\" (UID: \"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.499295 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d" (UID: "c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.504706 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-kube-api-access-5q9hx" (OuterVolumeSpecName: "kube-api-access-5q9hx") pod "c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d" (UID: "c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d"). InnerVolumeSpecName "kube-api-access-5q9hx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.600205 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.600253 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q9hx\" (UniqueName: \"kubernetes.io/projected/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d-kube-api-access-5q9hx\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.644256 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.672682 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r2tck"] Jan 22 16:54:30 crc kubenswrapper[4704]: E0122 16:54:30.673642 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-api" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.673660 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-api" Jan 22 16:54:30 crc kubenswrapper[4704]: E0122 16:54:30.673688 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-kuttl-api-log" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.673696 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-kuttl-api-log" Jan 22 16:54:30 crc kubenswrapper[4704]: E0122 16:54:30.673718 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d" containerName="mariadb-account-delete" Jan 
22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.673726 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d" containerName="mariadb-account-delete" Jan 22 16:54:30 crc kubenswrapper[4704]: E0122 16:54:30.673741 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" containerName="watcher-decision-engine" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.673749 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" containerName="watcher-decision-engine" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.673961 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" containerName="watcher-decision-engine" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.673981 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-api" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.673998 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f34bccd-0df7-49d2-a8ff-7a4e8b345ef4" containerName="watcher-kuttl-api-log" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.674014 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d" containerName="mariadb-account-delete" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.675403 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.693778 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2tck"] Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802305 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-combined-ca-bundle\") pod \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802396 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-logs\") pod \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802418 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-custom-prometheus-ca\") pod \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802450 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-config-data\") pod \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802537 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-cert-memcached-mtls\") pod \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\" (UID: 
\"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802606 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lcl6\" (UniqueName: \"kubernetes.io/projected/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-kube-api-access-2lcl6\") pod \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\" (UID: \"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3\") " Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802954 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-utilities\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.802990 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-logs" (OuterVolumeSpecName: "logs") pod "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" (UID: "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.803127 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-catalog-content\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.803204 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqrtv\" (UniqueName: \"kubernetes.io/projected/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-kube-api-access-nqrtv\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.803412 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.805531 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-kube-api-access-2lcl6" (OuterVolumeSpecName: "kube-api-access-2lcl6") pod "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" (UID: "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3"). InnerVolumeSpecName "kube-api-access-2lcl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.826501 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" (UID: "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.828181 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" (UID: "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.860011 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-config-data" (OuterVolumeSpecName: "config-data") pod "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" (UID: "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.874486 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" (UID: "e1ef5b8e-953f-404f-ba1a-e91a5ef51be3"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904428 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-catalog-content\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904485 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqrtv\" (UniqueName: \"kubernetes.io/projected/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-kube-api-access-nqrtv\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904592 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-utilities\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904673 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904688 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lcl6\" (UniqueName: \"kubernetes.io/projected/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-kube-api-access-2lcl6\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904701 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904713 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.904725 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.905050 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-catalog-content\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.905192 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-utilities\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.921539 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqrtv\" (UniqueName: \"kubernetes.io/projected/9b1e8af3-da39-42d0-bc3e-5be66c218bfe-kube-api-access-nqrtv\") pod \"certified-operators-r2tck\" (UID: \"9b1e8af3-da39-42d0-bc3e-5be66c218bfe\") " pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.926942 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.926788 4704 generic.go:334] "Generic (PLEG): container finished" podID="e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" containerID="36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3" exitCode=0 Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.926941 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3","Type":"ContainerDied","Data":"36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3"} Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.927152 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e1ef5b8e-953f-404f-ba1a-e91a5ef51be3","Type":"ContainerDied","Data":"ba43a4c62da7ce51fa848c284b6647cd489da4fc62fefcc4c055b3ae21fcf9ae"} Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.927188 4704 scope.go:117] "RemoveContainer" containerID="36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.935203 4704 generic.go:334] "Generic (PLEG): container finished" podID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerID="411e33787a0bcb79169e5cc0ea57f3f700d88e327b2c46b8863070b13928892b" exitCode=0 Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.935234 4704 generic.go:334] "Generic (PLEG): container finished" podID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerID="9f28656b1463a710817e11346a593846b4b643acdca54aa3bfde8a4cba29009a" exitCode=0 Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.935297 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerDied","Data":"411e33787a0bcb79169e5cc0ea57f3f700d88e327b2c46b8863070b13928892b"} Jan 
22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.935323 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerDied","Data":"9f28656b1463a710817e11346a593846b4b643acdca54aa3bfde8a4cba29009a"} Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.936750 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" event={"ID":"c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d","Type":"ContainerDied","Data":"9cd0e343c323df3e849f26ecc3b606cc1711769f27d2414e008739d0d6a6efe4"} Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.936775 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd0e343c323df3e849f26ecc3b606cc1711769f27d2414e008739d0d6a6efe4" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.936840 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchercf37-account-delete-9b88p" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.948618 4704 scope.go:117] "RemoveContainer" containerID="36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3" Jan 22 16:54:30 crc kubenswrapper[4704]: E0122 16:54:30.959963 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3\": container with ID starting with 36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3 not found: ID does not exist" containerID="36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.960017 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3"} err="failed to get container status 
\"36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3\": rpc error: code = NotFound desc = could not find container \"36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3\": container with ID starting with 36f66f824b14bacac0099448f7235cef2370ce208599b5927ce6fbc67699c4d3 not found: ID does not exist" Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.975052 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:54:30 crc kubenswrapper[4704]: I0122 16:54:30.983008 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:54:31 crc kubenswrapper[4704]: I0122 16:54:31.000905 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:31 crc kubenswrapper[4704]: I0122 16:54:31.528940 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2tck"] Jan 22 16:54:31 crc kubenswrapper[4704]: I0122 16:54:31.643956 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ef5b8e-953f-404f-ba1a-e91a5ef51be3" path="/var/lib/kubelet/pods/e1ef5b8e-953f-404f-ba1a-e91a5ef51be3/volumes" Jan 22 16:54:31 crc kubenswrapper[4704]: I0122 16:54:31.946786 4704 generic.go:334] "Generic (PLEG): container finished" podID="9b1e8af3-da39-42d0-bc3e-5be66c218bfe" containerID="0a0432378ef69ca1ec2e447c545d26127ca4d79402bf3e32b289fb6d128f39fb" exitCode=0 Jan 22 16:54:31 crc kubenswrapper[4704]: I0122 16:54:31.946909 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2tck" event={"ID":"9b1e8af3-da39-42d0-bc3e-5be66c218bfe","Type":"ContainerDied","Data":"0a0432378ef69ca1ec2e447c545d26127ca4d79402bf3e32b289fb6d128f39fb"} Jan 22 16:54:31 crc kubenswrapper[4704]: I0122 16:54:31.947765 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-r2tck" event={"ID":"9b1e8af3-da39-42d0-bc3e-5be66c218bfe","Type":"ContainerStarted","Data":"8dafdae23b554ad432679fed1c6c96facff80456e7fe677afaaef15822952f81"} Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.006607 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-qxd4w"] Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.016339 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-qxd4w"] Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.027778 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchercf37-account-delete-9b88p"] Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.034071 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw"] Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.039346 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-cf37-account-create-update-4xlqw"] Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.044682 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchercf37-account-delete-9b88p"] Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.670287 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.731093 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40ba1d1-a055-487e-b779-171ce0f656a2-logs\") pod \"f40ba1d1-a055-487e-b779-171ce0f656a2\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.731176 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-combined-ca-bundle\") pod \"f40ba1d1-a055-487e-b779-171ce0f656a2\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.731260 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-config-data\") pod \"f40ba1d1-a055-487e-b779-171ce0f656a2\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.731337 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-cert-memcached-mtls\") pod \"f40ba1d1-a055-487e-b779-171ce0f656a2\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.731436 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k2dh\" (UniqueName: \"kubernetes.io/projected/f40ba1d1-a055-487e-b779-171ce0f656a2-kube-api-access-8k2dh\") pod \"f40ba1d1-a055-487e-b779-171ce0f656a2\" (UID: \"f40ba1d1-a055-487e-b779-171ce0f656a2\") " Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.731678 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f40ba1d1-a055-487e-b779-171ce0f656a2-logs" (OuterVolumeSpecName: "logs") pod "f40ba1d1-a055-487e-b779-171ce0f656a2" (UID: "f40ba1d1-a055-487e-b779-171ce0f656a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.733084 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40ba1d1-a055-487e-b779-171ce0f656a2-logs\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.750787 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40ba1d1-a055-487e-b779-171ce0f656a2-kube-api-access-8k2dh" (OuterVolumeSpecName: "kube-api-access-8k2dh") pod "f40ba1d1-a055-487e-b779-171ce0f656a2" (UID: "f40ba1d1-a055-487e-b779-171ce0f656a2"). InnerVolumeSpecName "kube-api-access-8k2dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.815665 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f40ba1d1-a055-487e-b779-171ce0f656a2" (UID: "f40ba1d1-a055-487e-b779-171ce0f656a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.843883 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k2dh\" (UniqueName: \"kubernetes.io/projected/f40ba1d1-a055-487e-b779-171ce0f656a2-kube-api-access-8k2dh\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.843925 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.852313 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-config-data" (OuterVolumeSpecName: "config-data") pod "f40ba1d1-a055-487e-b779-171ce0f656a2" (UID: "f40ba1d1-a055-487e-b779-171ce0f656a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.865322 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f40ba1d1-a055-487e-b779-171ce0f656a2" (UID: "f40ba1d1-a055-487e-b779-171ce0f656a2"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.946582 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.946641 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f40ba1d1-a055-487e-b779-171ce0f656a2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.965382 4704 generic.go:334] "Generic (PLEG): container finished" podID="f40ba1d1-a055-487e-b779-171ce0f656a2" containerID="73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" exitCode=0 Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.965451 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f40ba1d1-a055-487e-b779-171ce0f656a2","Type":"ContainerDied","Data":"73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56"} Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.965483 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f40ba1d1-a055-487e-b779-171ce0f656a2","Type":"ContainerDied","Data":"517774b35118359e325bc88493ae7a8620161d528df6e3f5b59e335d3d821d73"} Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.965503 4704 scope.go:117] "RemoveContainer" containerID="73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.965644 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.976958 4704 generic.go:334] "Generic (PLEG): container finished" podID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerID="3c4de701a0baa1111fd36d0989bda4ccd041b8ff70804e682f4c5be1b3b9c83d" exitCode=0 Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.977003 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerDied","Data":"3c4de701a0baa1111fd36d0989bda4ccd041b8ff70804e682f4c5be1b3b9c83d"} Jan 22 16:54:32 crc kubenswrapper[4704]: I0122 16:54:32.977427 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.000133 4704 scope.go:117] "RemoveContainer" containerID="73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" Jan 22 16:54:33 crc kubenswrapper[4704]: E0122 16:54:33.000680 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56\": container with ID starting with 73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56 not found: ID does not exist" containerID="73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.000723 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56"} err="failed to get container status \"73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56\": rpc error: code = NotFound desc = could not find container \"73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56\": container with ID starting with 
73588abf2adc503ee831f5bc9f2d7f7934d4ba1853496c9f1eb77842b81d8d56 not found: ID does not exist" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.035272 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.046402 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049362 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzcg6\" (UniqueName: \"kubernetes.io/projected/c75c5f08-663e-45b0-a9f3-4e43a1893fde-kube-api-access-wzcg6\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049439 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-config-data\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049510 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-sg-core-conf-yaml\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049548 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-log-httpd\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049591 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-scripts\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049631 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-run-httpd\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049653 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-ceilometer-tls-certs\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.049683 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-combined-ca-bundle\") pod \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\" (UID: \"c75c5f08-663e-45b0-a9f3-4e43a1893fde\") " Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.050007 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.050540 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.053609 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75c5f08-663e-45b0-a9f3-4e43a1893fde-kube-api-access-wzcg6" (OuterVolumeSpecName: "kube-api-access-wzcg6") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "kube-api-access-wzcg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.073965 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-scripts" (OuterVolumeSpecName: "scripts") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.089500 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.115155 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.134918 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.151651 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.151933 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.152040 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c75c5f08-663e-45b0-a9f3-4e43a1893fde-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.152126 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" 
Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.152415 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.152513 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzcg6\" (UniqueName: \"kubernetes.io/projected/c75c5f08-663e-45b0-a9f3-4e43a1893fde-kube-api-access-wzcg6\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.152628 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.171488 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-config-data" (OuterVolumeSpecName: "config-data") pod "c75c5f08-663e-45b0-a9f3-4e43a1893fde" (UID: "c75c5f08-663e-45b0-a9f3-4e43a1893fde"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.254028 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75c5f08-663e-45b0-a9f3-4e43a1893fde-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.648653 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b065d253-835b-4186-b4f0-7b4cca0c0858" path="/var/lib/kubelet/pods/b065d253-835b-4186-b4f0-7b4cca0c0858/volumes" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.649289 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d" path="/var/lib/kubelet/pods/c9bb3f29-5eb9-483a-b3e0-bf51383ffd0d/volumes" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.649997 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40ba1d1-a055-487e-b779-171ce0f656a2" path="/var/lib/kubelet/pods/f40ba1d1-a055-487e-b779-171ce0f656a2/volumes" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.655522 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b2cb7a-380b-4064-b7fe-100955d2132e" path="/var/lib/kubelet/pods/f6b2cb7a-380b-4064-b7fe-100955d2132e/volumes" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.825679 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-rj4nk"] Jan 22 16:54:33 crc kubenswrapper[4704]: E0122 16:54:33.826032 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="sg-core" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826046 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="sg-core" Jan 22 16:54:33 crc kubenswrapper[4704]: E0122 16:54:33.826055 4704 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f40ba1d1-a055-487e-b779-171ce0f656a2" containerName="watcher-applier" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826060 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40ba1d1-a055-487e-b779-171ce0f656a2" containerName="watcher-applier" Jan 22 16:54:33 crc kubenswrapper[4704]: E0122 16:54:33.826072 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-central-agent" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826078 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-central-agent" Jan 22 16:54:33 crc kubenswrapper[4704]: E0122 16:54:33.826096 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-notification-agent" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826103 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-notification-agent" Jan 22 16:54:33 crc kubenswrapper[4704]: E0122 16:54:33.826115 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="proxy-httpd" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826121 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="proxy-httpd" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826256 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="proxy-httpd" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826269 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="sg-core" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826278 4704 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-notification-agent" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826290 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" containerName="ceilometer-central-agent" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826298 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40ba1d1-a055-487e-b779-171ce0f656a2" containerName="watcher-applier" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.826777 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.839554 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rj4nk"] Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.852517 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-4455-account-create-update-t2wb8"] Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.870205 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.874058 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-4455-account-create-update-t2wb8"] Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.874082 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.972944 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhdq7\" (UniqueName: \"kubernetes.io/projected/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-kube-api-access-lhdq7\") pod \"watcher-4455-account-create-update-t2wb8\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.973023 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-operator-scripts\") pod \"watcher-4455-account-create-update-t2wb8\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.973088 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8a1d55f-0694-4d21-866a-2304b23d5864-operator-scripts\") pod \"watcher-db-create-rj4nk\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.973201 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptxv\" (UniqueName: 
\"kubernetes.io/projected/a8a1d55f-0694-4d21-866a-2304b23d5864-kube-api-access-2ptxv\") pod \"watcher-db-create-rj4nk\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.990724 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c75c5f08-663e-45b0-a9f3-4e43a1893fde","Type":"ContainerDied","Data":"65ee2ec19d0d262e6cd249912382bf7330918e75808b8fd949c2009f40ffbb56"} Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.990773 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:33 crc kubenswrapper[4704]: I0122 16:54:33.990811 4704 scope.go:117] "RemoveContainer" containerID="411e33787a0bcb79169e5cc0ea57f3f700d88e327b2c46b8863070b13928892b" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.017768 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.026919 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.029363 4704 scope.go:117] "RemoveContainer" containerID="e16954d834f6643b39ce1da201ffcdfd2715d34294dc39d616e74a897c5f1d3d" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.040502 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.044097 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.048127 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.048162 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.050000 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.075063 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ptxv\" (UniqueName: \"kubernetes.io/projected/a8a1d55f-0694-4d21-866a-2304b23d5864-kube-api-access-2ptxv\") pod \"watcher-db-create-rj4nk\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.075414 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhdq7\" (UniqueName: \"kubernetes.io/projected/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-kube-api-access-lhdq7\") pod \"watcher-4455-account-create-update-t2wb8\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.075616 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-operator-scripts\") pod \"watcher-4455-account-create-update-t2wb8\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.076038 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8a1d55f-0694-4d21-866a-2304b23d5864-operator-scripts\") pod \"watcher-db-create-rj4nk\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.075185 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.076566 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-operator-scripts\") pod \"watcher-4455-account-create-update-t2wb8\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.076674 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8a1d55f-0694-4d21-866a-2304b23d5864-operator-scripts\") pod \"watcher-db-create-rj4nk\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.079214 4704 scope.go:117] "RemoveContainer" containerID="3c4de701a0baa1111fd36d0989bda4ccd041b8ff70804e682f4c5be1b3b9c83d" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.094485 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhdq7\" (UniqueName: \"kubernetes.io/projected/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-kube-api-access-lhdq7\") pod \"watcher-4455-account-create-update-t2wb8\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.102406 4704 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2ptxv\" (UniqueName: \"kubernetes.io/projected/a8a1d55f-0694-4d21-866a-2304b23d5864-kube-api-access-2ptxv\") pod \"watcher-db-create-rj4nk\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.117992 4704 scope.go:117] "RemoveContainer" containerID="9f28656b1463a710817e11346a593846b4b643acdca54aa3bfde8a4cba29009a" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.144249 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177220 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-scripts\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177283 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177341 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-run-httpd\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177367 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-log-httpd\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177407 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-config-data\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177442 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzd5p\" (UniqueName: \"kubernetes.io/projected/c85e979b-2349-4140-a9b7-295eff282279-kube-api-access-bzd5p\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177487 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.177518 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.187248 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278612 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278676 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-run-httpd\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278699 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-log-httpd\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278729 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-config-data\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278757 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzd5p\" (UniqueName: \"kubernetes.io/projected/c85e979b-2349-4140-a9b7-295eff282279-kube-api-access-bzd5p\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278793 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278834 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.278867 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-scripts\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.284741 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-run-httpd\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.285037 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-scripts\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.287337 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.289383 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.289668 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.291924 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-config-data\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.293627 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.312732 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzd5p\" (UniqueName: \"kubernetes.io/projected/c85e979b-2349-4140-a9b7-295eff282279-kube-api-access-bzd5p\") pod \"ceilometer-0\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.378435 4704 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.668575 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rj4nk"] Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.759783 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-4455-account-create-update-t2wb8"] Jan 22 16:54:34 crc kubenswrapper[4704]: I0122 16:54:34.921855 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 16:54:34 crc kubenswrapper[4704]: W0122 16:54:34.941102 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc85e979b_2349_4140_a9b7_295eff282279.slice/crio-66a8d3662303723ed0f50647090183e807917cd5678a5c1fc36ef8bd9066f21e WatchSource:0}: Error finding container 66a8d3662303723ed0f50647090183e807917cd5678a5c1fc36ef8bd9066f21e: Status 404 returned error can't find the container with id 66a8d3662303723ed0f50647090183e807917cd5678a5c1fc36ef8bd9066f21e Jan 22 16:54:35 crc kubenswrapper[4704]: I0122 16:54:35.001109 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-rj4nk" event={"ID":"a8a1d55f-0694-4d21-866a-2304b23d5864","Type":"ContainerStarted","Data":"ce05bb736f5a38b056227d006d432f47123ecab9724be473e15b8a22e5e80342"} Jan 22 16:54:35 crc kubenswrapper[4704]: I0122 16:54:35.003311 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" event={"ID":"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6","Type":"ContainerStarted","Data":"5faa29e3088db6c52ce28e8d54f1e34b1d39b7be34c62916b75a254d5c0fbc90"} Jan 22 16:54:35 crc kubenswrapper[4704]: I0122 16:54:35.004783 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerStarted","Data":"66a8d3662303723ed0f50647090183e807917cd5678a5c1fc36ef8bd9066f21e"} Jan 22 16:54:35 crc kubenswrapper[4704]: I0122 16:54:35.643257 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75c5f08-663e-45b0-a9f3-4e43a1893fde" path="/var/lib/kubelet/pods/c75c5f08-663e-45b0-a9f3-4e43a1893fde/volumes" Jan 22 16:54:36 crc kubenswrapper[4704]: I0122 16:54:36.017313 4704 generic.go:334] "Generic (PLEG): container finished" podID="a8a1d55f-0694-4d21-866a-2304b23d5864" containerID="42e6953a7ae21d3be1de4329f293ebcf76f7dbd9401643e140639f63099dd8b9" exitCode=0 Jan 22 16:54:36 crc kubenswrapper[4704]: I0122 16:54:36.017404 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-rj4nk" event={"ID":"a8a1d55f-0694-4d21-866a-2304b23d5864","Type":"ContainerDied","Data":"42e6953a7ae21d3be1de4329f293ebcf76f7dbd9401643e140639f63099dd8b9"} Jan 22 16:54:36 crc kubenswrapper[4704]: I0122 16:54:36.020530 4704 generic.go:334] "Generic (PLEG): container finished" podID="1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6" containerID="264f2b0fab046086e5221dca03bf024561e0ba8a3035b810dc2bc349a3fd331a" exitCode=0 Jan 22 16:54:36 crc kubenswrapper[4704]: I0122 16:54:36.020580 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" event={"ID":"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6","Type":"ContainerDied","Data":"264f2b0fab046086e5221dca03bf024561e0ba8a3035b810dc2bc349a3fd331a"} Jan 22 16:54:36 crc kubenswrapper[4704]: I0122 16:54:36.022099 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerStarted","Data":"a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e"} Jan 22 16:54:36 crc kubenswrapper[4704]: I0122 16:54:36.633719 4704 scope.go:117] "RemoveContainer" 
containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:54:36 crc kubenswrapper[4704]: E0122 16:54:36.634002 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.527386 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.534236 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.635806 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ptxv\" (UniqueName: \"kubernetes.io/projected/a8a1d55f-0694-4d21-866a-2304b23d5864-kube-api-access-2ptxv\") pod \"a8a1d55f-0694-4d21-866a-2304b23d5864\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.635868 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-operator-scripts\") pod \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.636022 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8a1d55f-0694-4d21-866a-2304b23d5864-operator-scripts\") pod 
\"a8a1d55f-0694-4d21-866a-2304b23d5864\" (UID: \"a8a1d55f-0694-4d21-866a-2304b23d5864\") " Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.636081 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhdq7\" (UniqueName: \"kubernetes.io/projected/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-kube-api-access-lhdq7\") pod \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\" (UID: \"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6\") " Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.639274 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6" (UID: "1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.640238 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a1d55f-0694-4d21-866a-2304b23d5864-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8a1d55f-0694-4d21-866a-2304b23d5864" (UID: "a8a1d55f-0694-4d21-866a-2304b23d5864"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.643248 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a1d55f-0694-4d21-866a-2304b23d5864-kube-api-access-2ptxv" (OuterVolumeSpecName: "kube-api-access-2ptxv") pod "a8a1d55f-0694-4d21-866a-2304b23d5864" (UID: "a8a1d55f-0694-4d21-866a-2304b23d5864"). InnerVolumeSpecName "kube-api-access-2ptxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.644234 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-kube-api-access-lhdq7" (OuterVolumeSpecName: "kube-api-access-lhdq7") pod "1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6" (UID: "1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6"). InnerVolumeSpecName "kube-api-access-lhdq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.738207 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhdq7\" (UniqueName: \"kubernetes.io/projected/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-kube-api-access-lhdq7\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.738492 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ptxv\" (UniqueName: \"kubernetes.io/projected/a8a1d55f-0694-4d21-866a-2304b23d5864-kube-api-access-2ptxv\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.738565 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:37 crc kubenswrapper[4704]: I0122 16:54:37.738580 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8a1d55f-0694-4d21-866a-2304b23d5864-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.047740 4704 generic.go:334] "Generic (PLEG): container finished" podID="9b1e8af3-da39-42d0-bc3e-5be66c218bfe" containerID="bdbf288f18c263073fb76cd8de18184b8d75817fae5e7f084932161991531f51" exitCode=0 Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.047874 4704 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-r2tck" event={"ID":"9b1e8af3-da39-42d0-bc3e-5be66c218bfe","Type":"ContainerDied","Data":"bdbf288f18c263073fb76cd8de18184b8d75817fae5e7f084932161991531f51"} Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.050636 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" event={"ID":"1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6","Type":"ContainerDied","Data":"5faa29e3088db6c52ce28e8d54f1e34b1d39b7be34c62916b75a254d5c0fbc90"} Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.050669 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5faa29e3088db6c52ce28e8d54f1e34b1d39b7be34c62916b75a254d5c0fbc90" Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.050723 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4455-account-create-update-t2wb8" Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.052706 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerStarted","Data":"3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480"} Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.061613 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-rj4nk" event={"ID":"a8a1d55f-0694-4d21-866a-2304b23d5864","Type":"ContainerDied","Data":"ce05bb736f5a38b056227d006d432f47123ecab9724be473e15b8a22e5e80342"} Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.061715 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce05bb736f5a38b056227d006d432f47123ecab9724be473e15b8a22e5e80342" Jan 22 16:54:38 crc kubenswrapper[4704]: I0122 16:54:38.061881 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-rj4nk" Jan 22 16:54:39 crc kubenswrapper[4704]: I0122 16:54:39.070321 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerStarted","Data":"52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614"} Jan 22 16:54:39 crc kubenswrapper[4704]: I0122 16:54:39.072353 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2tck" event={"ID":"9b1e8af3-da39-42d0-bc3e-5be66c218bfe","Type":"ContainerStarted","Data":"c8d8baf58eb298c221d91e26b31ae3e595ac67d1eb7e38db2d7a41c03aa9adfb"} Jan 22 16:54:39 crc kubenswrapper[4704]: I0122 16:54:39.092950 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r2tck" podStartSLOduration=2.591173269 podStartE2EDuration="9.092924207s" podCreationTimestamp="2026-01-22 16:54:30 +0000 UTC" firstStartedPulling="2026-01-22 16:54:31.948647241 +0000 UTC m=+1564.593193941" lastFinishedPulling="2026-01-22 16:54:38.450398179 +0000 UTC m=+1571.094944879" observedRunningTime="2026-01-22 16:54:39.088260814 +0000 UTC m=+1571.732807514" watchObservedRunningTime="2026-01-22 16:54:39.092924207 +0000 UTC m=+1571.737470907" Jan 22 16:54:40 crc kubenswrapper[4704]: I0122 16:54:40.082869 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerStarted","Data":"daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f"} Jan 22 16:54:40 crc kubenswrapper[4704]: I0122 16:54:40.107267 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.342583426 podStartE2EDuration="6.107241188s" podCreationTimestamp="2026-01-22 16:54:34 +0000 UTC" firstStartedPulling="2026-01-22 
16:54:34.945763442 +0000 UTC m=+1567.590310142" lastFinishedPulling="2026-01-22 16:54:39.710421194 +0000 UTC m=+1572.354967904" observedRunningTime="2026-01-22 16:54:40.101015091 +0000 UTC m=+1572.745561811" watchObservedRunningTime="2026-01-22 16:54:40.107241188 +0000 UTC m=+1572.751787908" Jan 22 16:54:41 crc kubenswrapper[4704]: I0122 16:54:41.001396 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:41 crc kubenswrapper[4704]: I0122 16:54:41.001643 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:41 crc kubenswrapper[4704]: I0122 16:54:41.053626 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:41 crc kubenswrapper[4704]: I0122 16:54:41.096393 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:54:48 crc kubenswrapper[4704]: I0122 16:54:48.634402 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:54:48 crc kubenswrapper[4704]: E0122 16:54:48.635340 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:54:51 crc kubenswrapper[4704]: I0122 16:54:51.123449 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r2tck" Jan 22 16:54:54 crc kubenswrapper[4704]: I0122 16:54:54.078834 4704 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2tck"] Jan 22 16:54:54 crc kubenswrapper[4704]: I0122 16:54:54.663206 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-df2wn"] Jan 22 16:54:54 crc kubenswrapper[4704]: I0122 16:54:54.663466 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-df2wn" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="registry-server" containerID="cri-o://33d508da5314958930af712b64252d236001d921731ec4bb77574dbb7c49cca5" gracePeriod=2 Jan 22 16:54:56 crc kubenswrapper[4704]: I0122 16:54:56.230595 4704 generic.go:334] "Generic (PLEG): container finished" podID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerID="33d508da5314958930af712b64252d236001d921731ec4bb77574dbb7c49cca5" exitCode=0 Jan 22 16:54:56 crc kubenswrapper[4704]: I0122 16:54:56.230683 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df2wn" event={"ID":"5f7f834d-3a2e-41b1-9b80-6cc0911843a8","Type":"ContainerDied","Data":"33d508da5314958930af712b64252d236001d921731ec4bb77574dbb7c49cca5"} Jan 22 16:54:56 crc kubenswrapper[4704]: I0122 16:54:56.994016 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.056809 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-catalog-content\") pod \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.056979 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zln7p\" (UniqueName: \"kubernetes.io/projected/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-kube-api-access-zln7p\") pod \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.057124 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-utilities\") pod \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\" (UID: \"5f7f834d-3a2e-41b1-9b80-6cc0911843a8\") " Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.057906 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-utilities" (OuterVolumeSpecName: "utilities") pod "5f7f834d-3a2e-41b1-9b80-6cc0911843a8" (UID: "5f7f834d-3a2e-41b1-9b80-6cc0911843a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.069201 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-kube-api-access-zln7p" (OuterVolumeSpecName: "kube-api-access-zln7p") pod "5f7f834d-3a2e-41b1-9b80-6cc0911843a8" (UID: "5f7f834d-3a2e-41b1-9b80-6cc0911843a8"). InnerVolumeSpecName "kube-api-access-zln7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.104909 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f7f834d-3a2e-41b1-9b80-6cc0911843a8" (UID: "5f7f834d-3a2e-41b1-9b80-6cc0911843a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.159016 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zln7p\" (UniqueName: \"kubernetes.io/projected/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-kube-api-access-zln7p\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.159060 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.159072 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7f834d-3a2e-41b1-9b80-6cc0911843a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.244517 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-df2wn" event={"ID":"5f7f834d-3a2e-41b1-9b80-6cc0911843a8","Type":"ContainerDied","Data":"e8973c6df9e98e389fa61a54d95d71b6a9911acbbf0b4b8192f1945d35d376d1"} Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.244611 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-df2wn" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.244621 4704 scope.go:117] "RemoveContainer" containerID="33d508da5314958930af712b64252d236001d921731ec4bb77574dbb7c49cca5" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.268904 4704 scope.go:117] "RemoveContainer" containerID="7ea43522f3392d937da2b4561c886add7c83eaa3552bdbf538c892b6e236eac0" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.283985 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-df2wn"] Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.290232 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-df2wn"] Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.325390 4704 scope.go:117] "RemoveContainer" containerID="244beae2cb72799177c34274bd654565eb98ed62db5eb58af574891ef96c9c77" Jan 22 16:54:57 crc kubenswrapper[4704]: I0122 16:54:57.647844 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" path="/var/lib/kubelet/pods/5f7f834d-3a2e-41b1-9b80-6cc0911843a8/volumes" Jan 22 16:55:00 crc kubenswrapper[4704]: I0122 16:55:00.633124 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:55:00 crc kubenswrapper[4704]: E0122 16:55:00.633830 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:55:04 crc kubenswrapper[4704]: I0122 16:55:04.482756 4704 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 16:55:13 crc kubenswrapper[4704]: I0122 16:55:13.634162 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:55:13 crc kubenswrapper[4704]: E0122 16:55:13.635018 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:55:26 crc kubenswrapper[4704]: I0122 16:55:26.634461 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:55:26 crc kubenswrapper[4704]: E0122 16:55:26.635422 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:55:37 crc kubenswrapper[4704]: I0122 16:55:37.333634 4704 scope.go:117] "RemoveContainer" containerID="bd781ce7f268cfa7db8b9de30cae702a6786d112de0c8dd39f040f99949a9ddc" Jan 22 16:55:37 crc kubenswrapper[4704]: I0122 16:55:37.377974 4704 scope.go:117] "RemoveContainer" containerID="99fb9373addcecd0349506a59cd1d6e42e4816c33e45b8128d6e638b9cc2613f" Jan 22 16:55:39 crc kubenswrapper[4704]: I0122 16:55:39.634566 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:55:39 crc kubenswrapper[4704]: 
E0122 16:55:39.635715 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:55:50 crc kubenswrapper[4704]: I0122 16:55:50.633821 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:55:50 crc kubenswrapper[4704]: E0122 16:55:50.634483 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:56:04 crc kubenswrapper[4704]: I0122 16:56:04.634251 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:56:04 crc kubenswrapper[4704]: E0122 16:56:04.636057 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:56:15 crc kubenswrapper[4704]: I0122 16:56:15.634547 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:56:15 crc 
kubenswrapper[4704]: E0122 16:56:15.635504 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:56:30 crc kubenswrapper[4704]: I0122 16:56:30.634592 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:56:30 crc kubenswrapper[4704]: E0122 16:56:30.635623 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:56:37 crc kubenswrapper[4704]: I0122 16:56:37.455147 4704 scope.go:117] "RemoveContainer" containerID="51aba93cbc57783b7925f69e5e1b668a2a53d2b7e61ea22b550798c72b4c6bb5" Jan 22 16:56:43 crc kubenswrapper[4704]: I0122 16:56:43.634147 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:56:43 crc kubenswrapper[4704]: E0122 16:56:43.634924 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 
22 16:56:55 crc kubenswrapper[4704]: I0122 16:56:55.633774 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:56:55 crc kubenswrapper[4704]: E0122 16:56:55.634616 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:57:06 crc kubenswrapper[4704]: I0122 16:57:06.633445 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:57:06 crc kubenswrapper[4704]: E0122 16:57:06.634249 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:57:19 crc kubenswrapper[4704]: I0122 16:57:19.633532 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:57:19 crc kubenswrapper[4704]: E0122 16:57:19.634414 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:57:30 crc kubenswrapper[4704]: I0122 16:57:30.634115 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:57:30 crc kubenswrapper[4704]: E0122 16:57:30.634727 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:57:37 crc kubenswrapper[4704]: I0122 16:57:37.526096 4704 scope.go:117] "RemoveContainer" containerID="3160f10dca44b170509667434366190a74b0d800b9a5a17c26f195e0a3e8ab47" Jan 22 16:57:37 crc kubenswrapper[4704]: I0122 16:57:37.557328 4704 scope.go:117] "RemoveContainer" containerID="84c66062483c0742c9b1302ab7ba8990c0e8ba55e393f1e6dbc1cd3556677351" Jan 22 16:57:37 crc kubenswrapper[4704]: I0122 16:57:37.621672 4704 scope.go:117] "RemoveContainer" containerID="89c4e83b4ac48c352d7d9291a182158eb1da884bea85d2ded26f1468caf634d3" Jan 22 16:57:37 crc kubenswrapper[4704]: I0122 16:57:37.675867 4704 scope.go:117] "RemoveContainer" containerID="448c1438da4c7b284c12fca557ea491c97c6bcfa93d7d80f3910b62391eaa940" Jan 22 16:57:41 crc kubenswrapper[4704]: I0122 16:57:41.633905 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:57:41 crc kubenswrapper[4704]: E0122 16:57:41.635385 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:57:54 crc kubenswrapper[4704]: I0122 16:57:54.633418 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:57:54 crc kubenswrapper[4704]: E0122 16:57:54.634229 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:58:05 crc kubenswrapper[4704]: I0122 16:58:05.633783 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:58:05 crc kubenswrapper[4704]: E0122 16:58:05.634594 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:58:18 crc kubenswrapper[4704]: I0122 16:58:18.634284 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:58:18 crc kubenswrapper[4704]: E0122 16:58:18.635184 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 16:58:32 crc kubenswrapper[4704]: I0122 16:58:32.634241 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 16:58:33 crc kubenswrapper[4704]: I0122 16:58:33.170299 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"fbfd2dfdd7d5192b0d486e087debbb041d258bd9f348744c87a1d512ab989a16"} Jan 22 16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.766578 4704 scope.go:117] "RemoveContainer" containerID="be4ac6d6e4ed96f67d644cde4a3b30ff6254ff429a947e9db3260e2ec1c9415c" Jan 22 16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.787296 4704 scope.go:117] "RemoveContainer" containerID="3602bc548fb24dd57cc5ae10664d11e46749da8779138552bb85719b7fc625b7" Jan 22 16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.815803 4704 scope.go:117] "RemoveContainer" containerID="2b1dbe0213f448866562a96e05a72bc97bc23264ca8ad2d6417d38ded492bdb2" Jan 22 16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.856989 4704 scope.go:117] "RemoveContainer" containerID="c0d03b0a34ae3554163e9f1ad62099484875c3c3e58f94ebd7258641a4d1aa19" Jan 22 16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.889367 4704 scope.go:117] "RemoveContainer" containerID="dfcd54f30faf33003ace5ad9f74039a0048395e7e2bc9eb554607164ff715205" Jan 22 16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.908207 4704 scope.go:117] "RemoveContainer" containerID="832888ea865d6efe665cbcccd50b683001940b8ddd1731a695ebfeee3e36ed5e" Jan 22 16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.968860 4704 scope.go:117] "RemoveContainer" containerID="e6b4beb1185b52c1b1447eb468bbf9a959ad5c8c15c89042fdabe3e6bd203014" Jan 22 
16:58:37 crc kubenswrapper[4704]: I0122 16:58:37.988888 4704 scope.go:117] "RemoveContainer" containerID="72d8ecab972575ac425308b65ee55f9f77ae9838ea331957c66459d8ba740734" Jan 22 16:58:54 crc kubenswrapper[4704]: I0122 16:58:54.053436 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-rh526"] Jan 22 16:58:54 crc kubenswrapper[4704]: I0122 16:58:54.059404 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/root-account-create-update-g7j9b"] Jan 22 16:58:54 crc kubenswrapper[4704]: I0122 16:58:54.071167 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-d016-account-create-update-dgxmw"] Jan 22 16:58:54 crc kubenswrapper[4704]: I0122 16:58:54.079519 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-d016-account-create-update-dgxmw"] Jan 22 16:58:54 crc kubenswrapper[4704]: I0122 16:58:54.085062 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-rh526"] Jan 22 16:58:54 crc kubenswrapper[4704]: I0122 16:58:54.091679 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-g7j9b"] Jan 22 16:58:55 crc kubenswrapper[4704]: I0122 16:58:55.659384 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce39ad9-5a21-4580-9adc-e2e23fc4bc69" path="/var/lib/kubelet/pods/2ce39ad9-5a21-4580-9adc-e2e23fc4bc69/volumes" Jan 22 16:58:55 crc kubenswrapper[4704]: I0122 16:58:55.660230 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59966f70-fec7-4445-8284-f9216b4ca610" path="/var/lib/kubelet/pods/59966f70-fec7-4445-8284-f9216b4ca610/volumes" Jan 22 16:58:55 crc kubenswrapper[4704]: I0122 16:58:55.661076 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2e480d-279d-4896-84a6-638c9b870958" 
path="/var/lib/kubelet/pods/bc2e480d-279d-4896-84a6-638c9b870958/volumes" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.818119 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher4455-account-delete-hn7db"] Jan 22 16:59:33 crc kubenswrapper[4704]: E0122 16:59:33.819125 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="extract-utilities" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819144 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="extract-utilities" Jan 22 16:59:33 crc kubenswrapper[4704]: E0122 16:59:33.819171 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="extract-content" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819179 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="extract-content" Jan 22 16:59:33 crc kubenswrapper[4704]: E0122 16:59:33.819193 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8a1d55f-0694-4d21-866a-2304b23d5864" containerName="mariadb-database-create" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819201 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8a1d55f-0694-4d21-866a-2304b23d5864" containerName="mariadb-database-create" Jan 22 16:59:33 crc kubenswrapper[4704]: E0122 16:59:33.819219 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6" containerName="mariadb-account-create-update" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819227 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6" containerName="mariadb-account-create-update" Jan 22 16:59:33 crc kubenswrapper[4704]: E0122 16:59:33.819240 4704 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="registry-server" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819248 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="registry-server" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819441 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f7f834d-3a2e-41b1-9b80-6cc0911843a8" containerName="registry-server" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819461 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8a1d55f-0694-4d21-866a-2304b23d5864" containerName="mariadb-database-create" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.819468 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6" containerName="mariadb-account-create-update" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.820167 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.851833 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4455-account-delete-hn7db"] Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.921943 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fe8198b-2104-49a0-a733-d69a6adf0a1a-operator-scripts\") pod \"watcher4455-account-delete-hn7db\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:33 crc kubenswrapper[4704]: I0122 16:59:33.922127 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs5hj\" (UniqueName: \"kubernetes.io/projected/0fe8198b-2104-49a0-a733-d69a6adf0a1a-kube-api-access-fs5hj\") pod \"watcher4455-account-delete-hn7db\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.023752 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fe8198b-2104-49a0-a733-d69a6adf0a1a-operator-scripts\") pod \"watcher4455-account-delete-hn7db\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.023860 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs5hj\" (UniqueName: \"kubernetes.io/projected/0fe8198b-2104-49a0-a733-d69a6adf0a1a-kube-api-access-fs5hj\") pod \"watcher4455-account-delete-hn7db\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:34 crc 
kubenswrapper[4704]: I0122 16:59:34.024563 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fe8198b-2104-49a0-a733-d69a6adf0a1a-operator-scripts\") pod \"watcher4455-account-delete-hn7db\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.032128 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-kw9c6"] Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.038323 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-kw9c6"] Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.047836 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs5hj\" (UniqueName: \"kubernetes.io/projected/0fe8198b-2104-49a0-a733-d69a6adf0a1a-kube-api-access-fs5hj\") pod \"watcher4455-account-delete-hn7db\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.141460 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.612624 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4455-account-delete-hn7db"] Jan 22 16:59:34 crc kubenswrapper[4704]: I0122 16:59:34.636106 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" event={"ID":"0fe8198b-2104-49a0-a733-d69a6adf0a1a","Type":"ContainerStarted","Data":"d18d0dbd1d0ec6cfb2144b5bb0558fd99ec687425a97a4459448d239fbbaf56a"} Jan 22 16:59:35 crc kubenswrapper[4704]: I0122 16:59:35.658156 4704 generic.go:334] "Generic (PLEG): container finished" podID="0fe8198b-2104-49a0-a733-d69a6adf0a1a" containerID="8686670078b96dc7ab4fa75139ef50eef55b4c8611c67041e7d9e25e4cd25fe3" exitCode=0 Jan 22 16:59:35 crc kubenswrapper[4704]: I0122 16:59:35.663281 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e980c7c4-ea1e-4496-a188-da0c060ccbb3" path="/var/lib/kubelet/pods/e980c7c4-ea1e-4496-a188-da0c060ccbb3/volumes" Jan 22 16:59:35 crc kubenswrapper[4704]: I0122 16:59:35.664692 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" event={"ID":"0fe8198b-2104-49a0-a733-d69a6adf0a1a","Type":"ContainerDied","Data":"8686670078b96dc7ab4fa75139ef50eef55b4c8611c67041e7d9e25e4cd25fe3"} Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.001665 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.069291 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g4tv2"] Jan 22 16:59:37 crc kubenswrapper[4704]: E0122 16:59:37.069891 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe8198b-2104-49a0-a733-d69a6adf0a1a" containerName="mariadb-account-delete" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.069921 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe8198b-2104-49a0-a733-d69a6adf0a1a" containerName="mariadb-account-delete" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.070179 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe8198b-2104-49a0-a733-d69a6adf0a1a" containerName="mariadb-account-delete" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.072054 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.079837 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4tv2"] Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.181406 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs5hj\" (UniqueName: \"kubernetes.io/projected/0fe8198b-2104-49a0-a733-d69a6adf0a1a-kube-api-access-fs5hj\") pod \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.181484 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fe8198b-2104-49a0-a733-d69a6adf0a1a-operator-scripts\") pod \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\" (UID: \"0fe8198b-2104-49a0-a733-d69a6adf0a1a\") " Jan 22 16:59:37 crc 
kubenswrapper[4704]: I0122 16:59:37.181652 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-catalog-content\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.181721 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wz8r\" (UniqueName: \"kubernetes.io/projected/a317b8ac-2de4-4e21-b74f-14690e86be56-kube-api-access-9wz8r\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.181872 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-utilities\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.182367 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe8198b-2104-49a0-a733-d69a6adf0a1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fe8198b-2104-49a0-a733-d69a6adf0a1a" (UID: "0fe8198b-2104-49a0-a733-d69a6adf0a1a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.187548 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe8198b-2104-49a0-a733-d69a6adf0a1a-kube-api-access-fs5hj" (OuterVolumeSpecName: "kube-api-access-fs5hj") pod "0fe8198b-2104-49a0-a733-d69a6adf0a1a" (UID: "0fe8198b-2104-49a0-a733-d69a6adf0a1a"). InnerVolumeSpecName "kube-api-access-fs5hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.283778 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wz8r\" (UniqueName: \"kubernetes.io/projected/a317b8ac-2de4-4e21-b74f-14690e86be56-kube-api-access-9wz8r\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.283907 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-utilities\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.283969 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-catalog-content\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.284060 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs5hj\" (UniqueName: \"kubernetes.io/projected/0fe8198b-2104-49a0-a733-d69a6adf0a1a-kube-api-access-fs5hj\") on node \"crc\" DevicePath \"\"" Jan 22 
16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.284075 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fe8198b-2104-49a0-a733-d69a6adf0a1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.284444 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-utilities\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.284524 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-catalog-content\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.304030 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wz8r\" (UniqueName: \"kubernetes.io/projected/a317b8ac-2de4-4e21-b74f-14690e86be56-kube-api-access-9wz8r\") pod \"community-operators-g4tv2\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.396320 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.674618 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" event={"ID":"0fe8198b-2104-49a0-a733-d69a6adf0a1a","Type":"ContainerDied","Data":"d18d0dbd1d0ec6cfb2144b5bb0558fd99ec687425a97a4459448d239fbbaf56a"} Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.674900 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d18d0dbd1d0ec6cfb2144b5bb0558fd99ec687425a97a4459448d239fbbaf56a" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.674953 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher4455-account-delete-hn7db" Jan 22 16:59:37 crc kubenswrapper[4704]: I0122 16:59:37.889272 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g4tv2"] Jan 22 16:59:37 crc kubenswrapper[4704]: W0122 16:59:37.891717 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda317b8ac_2de4_4e21_b74f_14690e86be56.slice/crio-90d7486fcd2865cec0350850a4146322f1ac709295c5025e414ddd9792b32eb9 WatchSource:0}: Error finding container 90d7486fcd2865cec0350850a4146322f1ac709295c5025e414ddd9792b32eb9: Status 404 returned error can't find the container with id 90d7486fcd2865cec0350850a4146322f1ac709295c5025e414ddd9792b32eb9 Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.117321 4704 scope.go:117] "RemoveContainer" containerID="ba1f32027fd0f7d936c42b1430588f5284c4255ae8a28fc519c4f21563cfbbbc" Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.184960 4704 scope.go:117] "RemoveContainer" containerID="1cbaa70673d363d3b1484242899ac4ae72d21e2821aedebf1ed3c7c86b666fce" Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.230212 4704 scope.go:117] 
"RemoveContainer" containerID="7ce7aca866bc88ce286aed2e6b4312002f7e2bca81e58995f8ca8878cf634cbb" Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.259101 4704 scope.go:117] "RemoveContainer" containerID="403176ccb83b11ca30c547005a1a5859a5e67e576901abf2d5b18f7088b0ad7f" Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.682989 4704 generic.go:334] "Generic (PLEG): container finished" podID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerID="cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e" exitCode=0 Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.683038 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4tv2" event={"ID":"a317b8ac-2de4-4e21-b74f-14690e86be56","Type":"ContainerDied","Data":"cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e"} Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.683064 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4tv2" event={"ID":"a317b8ac-2de4-4e21-b74f-14690e86be56","Type":"ContainerStarted","Data":"90d7486fcd2865cec0350850a4146322f1ac709295c5025e414ddd9792b32eb9"} Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.685101 4704 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.843659 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rj4nk"] Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.859292 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-rj4nk"] Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.869134 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher4455-account-delete-hn7db"] Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.877045 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher4455-account-delete-hn7db"] Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.884487 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-4455-account-create-update-t2wb8"] Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.890616 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-4455-account-create-update-t2wb8"] Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.933468 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-w4tbs"] Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.934585 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:38 crc kubenswrapper[4704]: I0122 16:59:38.946485 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-w4tbs"] Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.029253 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2"] Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.030689 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.033965 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.042222 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2"] Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.124483 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-operator-scripts\") pod \"watcher-db-create-w4tbs\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.124546 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk5w5\" (UniqueName: \"kubernetes.io/projected/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-kube-api-access-kk5w5\") pod \"watcher-db-create-w4tbs\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.124706 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb42bced-7bce-43db-8cd9-efa728c629a4-operator-scripts\") pod \"watcher-aeae-account-create-update-w9nl2\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.124727 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9xb6\" (UniqueName: 
\"kubernetes.io/projected/bb42bced-7bce-43db-8cd9-efa728c629a4-kube-api-access-j9xb6\") pod \"watcher-aeae-account-create-update-w9nl2\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.226998 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk5w5\" (UniqueName: \"kubernetes.io/projected/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-kube-api-access-kk5w5\") pod \"watcher-db-create-w4tbs\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.227166 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb42bced-7bce-43db-8cd9-efa728c629a4-operator-scripts\") pod \"watcher-aeae-account-create-update-w9nl2\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.227208 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9xb6\" (UniqueName: \"kubernetes.io/projected/bb42bced-7bce-43db-8cd9-efa728c629a4-kube-api-access-j9xb6\") pod \"watcher-aeae-account-create-update-w9nl2\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.227339 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-operator-scripts\") pod \"watcher-db-create-w4tbs\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.228131 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb42bced-7bce-43db-8cd9-efa728c629a4-operator-scripts\") pod \"watcher-aeae-account-create-update-w9nl2\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.228195 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-operator-scripts\") pod \"watcher-db-create-w4tbs\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.257715 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9xb6\" (UniqueName: \"kubernetes.io/projected/bb42bced-7bce-43db-8cd9-efa728c629a4-kube-api-access-j9xb6\") pod \"watcher-aeae-account-create-update-w9nl2\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.260111 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk5w5\" (UniqueName: \"kubernetes.io/projected/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-kube-api-access-kk5w5\") pod \"watcher-db-create-w4tbs\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.278401 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.346147 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.571331 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-w4tbs"] Jan 22 16:59:39 crc kubenswrapper[4704]: W0122 16:59:39.580059 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddedfbfe9_091a_4b70_b6fe_e24214f2bbe7.slice/crio-e958eae274db6fcfb8957dfca538f8e94ef51e2509344e5c1e13134f40bce57b WatchSource:0}: Error finding container e958eae274db6fcfb8957dfca538f8e94ef51e2509344e5c1e13134f40bce57b: Status 404 returned error can't find the container with id e958eae274db6fcfb8957dfca538f8e94ef51e2509344e5c1e13134f40bce57b Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.654462 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe8198b-2104-49a0-a733-d69a6adf0a1a" path="/var/lib/kubelet/pods/0fe8198b-2104-49a0-a733-d69a6adf0a1a/volumes" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.655291 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6" path="/var/lib/kubelet/pods/1f7ad1ab-b1bc-4dea-8ffa-99644c1af7f6/volumes" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.656008 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a1d55f-0694-4d21-866a-2304b23d5864" path="/var/lib/kubelet/pods/a8a1d55f-0694-4d21-866a-2304b23d5864/volumes" Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.695648 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4tv2" event={"ID":"a317b8ac-2de4-4e21-b74f-14690e86be56","Type":"ContainerStarted","Data":"a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c"} Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.696957 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-db-create-w4tbs" event={"ID":"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7","Type":"ContainerStarted","Data":"e958eae274db6fcfb8957dfca538f8e94ef51e2509344e5c1e13134f40bce57b"} Jan 22 16:59:39 crc kubenswrapper[4704]: I0122 16:59:39.702058 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2"] Jan 22 16:59:40 crc kubenswrapper[4704]: I0122 16:59:40.706489 4704 generic.go:334] "Generic (PLEG): container finished" podID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerID="a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c" exitCode=0 Jan 22 16:59:40 crc kubenswrapper[4704]: I0122 16:59:40.706529 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4tv2" event={"ID":"a317b8ac-2de4-4e21-b74f-14690e86be56","Type":"ContainerDied","Data":"a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c"} Jan 22 16:59:40 crc kubenswrapper[4704]: I0122 16:59:40.709373 4704 generic.go:334] "Generic (PLEG): container finished" podID="dedfbfe9-091a-4b70-b6fe-e24214f2bbe7" containerID="19325db0a96b547ce615cccab8e0d7efeab30f5c6b6c5ecdf9edda4a673b1d0c" exitCode=0 Jan 22 16:59:40 crc kubenswrapper[4704]: I0122 16:59:40.709444 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-w4tbs" event={"ID":"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7","Type":"ContainerDied","Data":"19325db0a96b547ce615cccab8e0d7efeab30f5c6b6c5ecdf9edda4a673b1d0c"} Jan 22 16:59:40 crc kubenswrapper[4704]: I0122 16:59:40.711259 4704 generic.go:334] "Generic (PLEG): container finished" podID="bb42bced-7bce-43db-8cd9-efa728c629a4" containerID="cd97f2e4d15db4e70b0de4195401bdcd48ea6be29b89a0a4479cb95f841b3176" exitCode=0 Jan 22 16:59:40 crc kubenswrapper[4704]: I0122 16:59:40.711281 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" 
event={"ID":"bb42bced-7bce-43db-8cd9-efa728c629a4","Type":"ContainerDied","Data":"cd97f2e4d15db4e70b0de4195401bdcd48ea6be29b89a0a4479cb95f841b3176"} Jan 22 16:59:40 crc kubenswrapper[4704]: I0122 16:59:40.711295 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" event={"ID":"bb42bced-7bce-43db-8cd9-efa728c629a4","Type":"ContainerStarted","Data":"92aa449fc0cb2151c1a523231852a15431dd8f57ff0f3cfa5c997f6e3374736e"} Jan 22 16:59:41 crc kubenswrapper[4704]: I0122 16:59:41.735852 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4tv2" event={"ID":"a317b8ac-2de4-4e21-b74f-14690e86be56","Type":"ContainerStarted","Data":"7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342"} Jan 22 16:59:41 crc kubenswrapper[4704]: I0122 16:59:41.779785 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g4tv2" podStartSLOduration=2.333892401 podStartE2EDuration="4.779765429s" podCreationTimestamp="2026-01-22 16:59:37 +0000 UTC" firstStartedPulling="2026-01-22 16:59:38.684750648 +0000 UTC m=+1871.329297358" lastFinishedPulling="2026-01-22 16:59:41.130623666 +0000 UTC m=+1873.775170386" observedRunningTime="2026-01-22 16:59:41.774943163 +0000 UTC m=+1874.419489873" watchObservedRunningTime="2026-01-22 16:59:41.779765429 +0000 UTC m=+1874.424312149" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.187170 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.191628 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.290049 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk5w5\" (UniqueName: \"kubernetes.io/projected/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-kube-api-access-kk5w5\") pod \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.290098 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb42bced-7bce-43db-8cd9-efa728c629a4-operator-scripts\") pod \"bb42bced-7bce-43db-8cd9-efa728c629a4\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.290228 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9xb6\" (UniqueName: \"kubernetes.io/projected/bb42bced-7bce-43db-8cd9-efa728c629a4-kube-api-access-j9xb6\") pod \"bb42bced-7bce-43db-8cd9-efa728c629a4\" (UID: \"bb42bced-7bce-43db-8cd9-efa728c629a4\") " Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.290262 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-operator-scripts\") pod \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\" (UID: \"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7\") " Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.291179 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dedfbfe9-091a-4b70-b6fe-e24214f2bbe7" (UID: "dedfbfe9-091a-4b70-b6fe-e24214f2bbe7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.291186 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb42bced-7bce-43db-8cd9-efa728c629a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb42bced-7bce-43db-8cd9-efa728c629a4" (UID: "bb42bced-7bce-43db-8cd9-efa728c629a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.295705 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-kube-api-access-kk5w5" (OuterVolumeSpecName: "kube-api-access-kk5w5") pod "dedfbfe9-091a-4b70-b6fe-e24214f2bbe7" (UID: "dedfbfe9-091a-4b70-b6fe-e24214f2bbe7"). InnerVolumeSpecName "kube-api-access-kk5w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.299858 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb42bced-7bce-43db-8cd9-efa728c629a4-kube-api-access-j9xb6" (OuterVolumeSpecName: "kube-api-access-j9xb6") pod "bb42bced-7bce-43db-8cd9-efa728c629a4" (UID: "bb42bced-7bce-43db-8cd9-efa728c629a4"). InnerVolumeSpecName "kube-api-access-j9xb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.392267 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9xb6\" (UniqueName: \"kubernetes.io/projected/bb42bced-7bce-43db-8cd9-efa728c629a4-kube-api-access-j9xb6\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.392308 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.392320 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk5w5\" (UniqueName: \"kubernetes.io/projected/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7-kube-api-access-kk5w5\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.392330 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb42bced-7bce-43db-8cd9-efa728c629a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.743901 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-w4tbs" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.744092 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-w4tbs" event={"ID":"dedfbfe9-091a-4b70-b6fe-e24214f2bbe7","Type":"ContainerDied","Data":"e958eae274db6fcfb8957dfca538f8e94ef51e2509344e5c1e13134f40bce57b"} Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.744635 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e958eae274db6fcfb8957dfca538f8e94ef51e2509344e5c1e13134f40bce57b" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.751913 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.751930 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2" event={"ID":"bb42bced-7bce-43db-8cd9-efa728c629a4","Type":"ContainerDied","Data":"92aa449fc0cb2151c1a523231852a15431dd8f57ff0f3cfa5c997f6e3374736e"} Jan 22 16:59:42 crc kubenswrapper[4704]: I0122 16:59:42.751977 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92aa449fc0cb2151c1a523231852a15431dd8f57ff0f3cfa5c997f6e3374736e" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.473914 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bxv45"] Jan 22 16:59:44 crc kubenswrapper[4704]: E0122 16:59:44.474518 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedfbfe9-091a-4b70-b6fe-e24214f2bbe7" containerName="mariadb-database-create" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.474531 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedfbfe9-091a-4b70-b6fe-e24214f2bbe7" containerName="mariadb-database-create" Jan 22 16:59:44 crc 
kubenswrapper[4704]: E0122 16:59:44.474547 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb42bced-7bce-43db-8cd9-efa728c629a4" containerName="mariadb-account-create-update" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.474553 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb42bced-7bce-43db-8cd9-efa728c629a4" containerName="mariadb-account-create-update" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.474687 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedfbfe9-091a-4b70-b6fe-e24214f2bbe7" containerName="mariadb-database-create" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.474702 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb42bced-7bce-43db-8cd9-efa728c629a4" containerName="mariadb-account-create-update" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.475317 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.477354 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9t2nf" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.477565 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.494058 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bxv45"] Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.633557 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.633606 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w4fn\" (UniqueName: \"kubernetes.io/projected/df936479-fdcd-4406-a4bb-dd252552db0f-kube-api-access-6w4fn\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.633716 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.633743 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-config-data\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.735182 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.735239 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-config-data\") pod 
\"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.735356 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.735391 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w4fn\" (UniqueName: \"kubernetes.io/projected/df936479-fdcd-4406-a4bb-dd252552db0f-kube-api-access-6w4fn\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.741579 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-config-data\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.743423 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.745815 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-db-sync-config-data\") pod 
\"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.765685 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w4fn\" (UniqueName: \"kubernetes.io/projected/df936479-fdcd-4406-a4bb-dd252552db0f-kube-api-access-6w4fn\") pod \"watcher-kuttl-db-sync-bxv45\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:44 crc kubenswrapper[4704]: I0122 16:59:44.801427 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:45 crc kubenswrapper[4704]: I0122 16:59:45.245099 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bxv45"] Jan 22 16:59:45 crc kubenswrapper[4704]: W0122 16:59:45.246554 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf936479_fdcd_4406_a4bb_dd252552db0f.slice/crio-0130cef24ceac5339664884cbba340d21aea6b9d5704b09cd14b52b5a5230d6d WatchSource:0}: Error finding container 0130cef24ceac5339664884cbba340d21aea6b9d5704b09cd14b52b5a5230d6d: Status 404 returned error can't find the container with id 0130cef24ceac5339664884cbba340d21aea6b9d5704b09cd14b52b5a5230d6d Jan 22 16:59:45 crc kubenswrapper[4704]: I0122 16:59:45.783689 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" event={"ID":"df936479-fdcd-4406-a4bb-dd252552db0f","Type":"ContainerStarted","Data":"61e13668809eb9fe61020d6754a250461e4c2ce83cf8cad4636772bed90b46cf"} Jan 22 16:59:45 crc kubenswrapper[4704]: I0122 16:59:45.783756 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" 
event={"ID":"df936479-fdcd-4406-a4bb-dd252552db0f","Type":"ContainerStarted","Data":"0130cef24ceac5339664884cbba340d21aea6b9d5704b09cd14b52b5a5230d6d"} Jan 22 16:59:45 crc kubenswrapper[4704]: I0122 16:59:45.810078 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" podStartSLOduration=1.810054113 podStartE2EDuration="1.810054113s" podCreationTimestamp="2026-01-22 16:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:59:45.802049786 +0000 UTC m=+1878.446596496" watchObservedRunningTime="2026-01-22 16:59:45.810054113 +0000 UTC m=+1878.454600823" Jan 22 16:59:47 crc kubenswrapper[4704]: I0122 16:59:47.396449 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:47 crc kubenswrapper[4704]: I0122 16:59:47.396853 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:47 crc kubenswrapper[4704]: I0122 16:59:47.475416 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:47 crc kubenswrapper[4704]: I0122 16:59:47.847852 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:48 crc kubenswrapper[4704]: I0122 16:59:48.807743 4704 generic.go:334] "Generic (PLEG): container finished" podID="df936479-fdcd-4406-a4bb-dd252552db0f" containerID="61e13668809eb9fe61020d6754a250461e4c2ce83cf8cad4636772bed90b46cf" exitCode=0 Jan 22 16:59:48 crc kubenswrapper[4704]: I0122 16:59:48.808723 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" 
event={"ID":"df936479-fdcd-4406-a4bb-dd252552db0f","Type":"ContainerDied","Data":"61e13668809eb9fe61020d6754a250461e4c2ce83cf8cad4636772bed90b46cf"} Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.124542 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.223872 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-db-sync-config-data\") pod \"df936479-fdcd-4406-a4bb-dd252552db0f\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.224019 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-config-data\") pod \"df936479-fdcd-4406-a4bb-dd252552db0f\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.224104 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-combined-ca-bundle\") pod \"df936479-fdcd-4406-a4bb-dd252552db0f\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.224164 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w4fn\" (UniqueName: \"kubernetes.io/projected/df936479-fdcd-4406-a4bb-dd252552db0f-kube-api-access-6w4fn\") pod \"df936479-fdcd-4406-a4bb-dd252552db0f\" (UID: \"df936479-fdcd-4406-a4bb-dd252552db0f\") " Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.230005 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df936479-fdcd-4406-a4bb-dd252552db0f" (UID: "df936479-fdcd-4406-a4bb-dd252552db0f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.230136 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df936479-fdcd-4406-a4bb-dd252552db0f-kube-api-access-6w4fn" (OuterVolumeSpecName: "kube-api-access-6w4fn") pod "df936479-fdcd-4406-a4bb-dd252552db0f" (UID: "df936479-fdcd-4406-a4bb-dd252552db0f"). InnerVolumeSpecName "kube-api-access-6w4fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.249289 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df936479-fdcd-4406-a4bb-dd252552db0f" (UID: "df936479-fdcd-4406-a4bb-dd252552db0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.264940 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-config-data" (OuterVolumeSpecName: "config-data") pod "df936479-fdcd-4406-a4bb-dd252552db0f" (UID: "df936479-fdcd-4406-a4bb-dd252552db0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.325723 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.325758 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.325767 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df936479-fdcd-4406-a4bb-dd252552db0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.325776 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w4fn\" (UniqueName: \"kubernetes.io/projected/df936479-fdcd-4406-a4bb-dd252552db0f-kube-api-access-6w4fn\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.824046 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" event={"ID":"df936479-fdcd-4406-a4bb-dd252552db0f","Type":"ContainerDied","Data":"0130cef24ceac5339664884cbba340d21aea6b9d5704b09cd14b52b5a5230d6d"} Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.824430 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0130cef24ceac5339664884cbba340d21aea6b9d5704b09cd14b52b5a5230d6d" Jan 22 16:59:50 crc kubenswrapper[4704]: I0122 16:59:50.824345 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bxv45" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.056844 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4tv2"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.057140 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g4tv2" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="registry-server" containerID="cri-o://7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342" gracePeriod=2 Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.141495 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:59:51 crc kubenswrapper[4704]: E0122 16:59:51.141913 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df936479-fdcd-4406-a4bb-dd252552db0f" containerName="watcher-kuttl-db-sync" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.141925 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="df936479-fdcd-4406-a4bb-dd252552db0f" containerName="watcher-kuttl-db-sync" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.142072 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="df936479-fdcd-4406-a4bb-dd252552db0f" containerName="watcher-kuttl-db-sync" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.142606 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.148721 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.148731 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9t2nf" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.162314 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.227353 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.228631 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.231726 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.237781 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.240127 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.240195 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjtdn\" (UniqueName: 
\"kubernetes.io/projected/08fe61f0-464a-41cd-a81e-510d187bbe10-kube-api-access-vjtdn\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.240310 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.240339 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.240376 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fe61f0-464a-41cd-a81e-510d187bbe10-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.247950 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.249175 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.255183 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.259334 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.341935 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wnfr\" (UniqueName: \"kubernetes.io/projected/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-kube-api-access-2wnfr\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.341995 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342038 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342064 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85vc\" (UniqueName: \"kubernetes.io/projected/2fa598a4-a571-48d9-919a-77d7f41fd15a-kube-api-access-f85vc\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342086 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342127 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342145 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342165 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342208 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342235 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342254 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342288 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fe61f0-464a-41cd-a81e-510d187bbe10-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342320 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342367 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342392 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjtdn\" (UniqueName: \"kubernetes.io/projected/08fe61f0-464a-41cd-a81e-510d187bbe10-kube-api-access-vjtdn\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342421 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa598a4-a571-48d9-919a-77d7f41fd15a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.342453 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.343895 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fe61f0-464a-41cd-a81e-510d187bbe10-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.348590 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.348853 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.349427 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.366166 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjtdn\" (UniqueName: \"kubernetes.io/projected/08fe61f0-464a-41cd-a81e-510d187bbe10-kube-api-access-vjtdn\") pod \"watcher-kuttl-applier-0\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.443741 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.443881 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.443937 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.443983 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa598a4-a571-48d9-919a-77d7f41fd15a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444001 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444021 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wnfr\" (UniqueName: \"kubernetes.io/projected/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-kube-api-access-2wnfr\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444042 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444067 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444093 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f85vc\" (UniqueName: \"kubernetes.io/projected/2fa598a4-a571-48d9-919a-77d7f41fd15a-kube-api-access-f85vc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444115 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444137 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.444155 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.448400 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.448484 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa598a4-a571-48d9-919a-77d7f41fd15a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.449902 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.451674 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.451945 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.452533 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.453124 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.455387 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.456305 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.461488 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.465084 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f85vc\" (UniqueName: \"kubernetes.io/projected/2fa598a4-a571-48d9-919a-77d7f41fd15a-kube-api-access-f85vc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.478491 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wnfr\" (UniqueName: \"kubernetes.io/projected/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-kube-api-access-2wnfr\") pod \"watcher-kuttl-api-0\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.532482 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.535346 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.552773 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.569594 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.649355 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wz8r\" (UniqueName: \"kubernetes.io/projected/a317b8ac-2de4-4e21-b74f-14690e86be56-kube-api-access-9wz8r\") pod \"a317b8ac-2de4-4e21-b74f-14690e86be56\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.649509 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-utilities\") pod \"a317b8ac-2de4-4e21-b74f-14690e86be56\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.649624 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-catalog-content\") pod \"a317b8ac-2de4-4e21-b74f-14690e86be56\" (UID: \"a317b8ac-2de4-4e21-b74f-14690e86be56\") " Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.652678 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-utilities" (OuterVolumeSpecName: "utilities") pod "a317b8ac-2de4-4e21-b74f-14690e86be56" (UID: "a317b8ac-2de4-4e21-b74f-14690e86be56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.658992 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a317b8ac-2de4-4e21-b74f-14690e86be56-kube-api-access-9wz8r" (OuterVolumeSpecName: "kube-api-access-9wz8r") pod "a317b8ac-2de4-4e21-b74f-14690e86be56" (UID: "a317b8ac-2de4-4e21-b74f-14690e86be56"). InnerVolumeSpecName "kube-api-access-9wz8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.755149 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.755195 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wz8r\" (UniqueName: \"kubernetes.io/projected/a317b8ac-2de4-4e21-b74f-14690e86be56-kube-api-access-9wz8r\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.795326 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a317b8ac-2de4-4e21-b74f-14690e86be56" (UID: "a317b8ac-2de4-4e21-b74f-14690e86be56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.859401 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a317b8ac-2de4-4e21-b74f-14690e86be56-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.874979 4704 generic.go:334] "Generic (PLEG): container finished" podID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerID="7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342" exitCode=0 Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.875031 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g4tv2" event={"ID":"a317b8ac-2de4-4e21-b74f-14690e86be56","Type":"ContainerDied","Data":"7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342"} Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.875064 4704 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-g4tv2" event={"ID":"a317b8ac-2de4-4e21-b74f-14690e86be56","Type":"ContainerDied","Data":"90d7486fcd2865cec0350850a4146322f1ac709295c5025e414ddd9792b32eb9"} Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.875085 4704 scope.go:117] "RemoveContainer" containerID="7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.875154 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g4tv2" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.926868 4704 scope.go:117] "RemoveContainer" containerID="a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c" Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.956283 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g4tv2"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.964084 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g4tv2"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.978282 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 16:59:51 crc kubenswrapper[4704]: I0122 16:59:51.987089 4704 scope.go:117] "RemoveContainer" containerID="cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.007190 4704 scope.go:117] "RemoveContainer" containerID="7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342" Jan 22 16:59:52 crc kubenswrapper[4704]: E0122 16:59:52.007844 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342\": container with ID starting with 
7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342 not found: ID does not exist" containerID="7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.007898 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342"} err="failed to get container status \"7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342\": rpc error: code = NotFound desc = could not find container \"7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342\": container with ID starting with 7b86978fd19b37ab7605b25f2934c5d69e5bf9a324b7d3b0ca0f258435a26342 not found: ID does not exist" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.007932 4704 scope.go:117] "RemoveContainer" containerID="a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c" Jan 22 16:59:52 crc kubenswrapper[4704]: E0122 16:59:52.008368 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c\": container with ID starting with a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c not found: ID does not exist" containerID="a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.008696 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c"} err="failed to get container status \"a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c\": rpc error: code = NotFound desc = could not find container \"a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c\": container with ID starting with a5774259f94c28ccef0c4c6405a302960333ea26bfc4cbccc72720137030783c not found: ID does not 
exist" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.008717 4704 scope.go:117] "RemoveContainer" containerID="cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e" Jan 22 16:59:52 crc kubenswrapper[4704]: E0122 16:59:52.009053 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e\": container with ID starting with cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e not found: ID does not exist" containerID="cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.009217 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e"} err="failed to get container status \"cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e\": rpc error: code = NotFound desc = could not find container \"cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e\": container with ID starting with cc1408655d7b3e026762e9f16ab0fe8e9f7cb813e95c16c2dd2f57fa2db6d56e not found: ID does not exist" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.324343 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 16:59:52 crc kubenswrapper[4704]: W0122 16:59:52.335586 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec82b856_4b7d_4f89_9a0b_dc76f23b3089.slice/crio-cff6fc9282d7daec3ad6b4cdef7c2b8b00d3faa22cc3272fdd6a3beb43b53a8b WatchSource:0}: Error finding container cff6fc9282d7daec3ad6b4cdef7c2b8b00d3faa22cc3272fdd6a3beb43b53a8b: Status 404 returned error can't find the container with id cff6fc9282d7daec3ad6b4cdef7c2b8b00d3faa22cc3272fdd6a3beb43b53a8b Jan 22 16:59:52 crc kubenswrapper[4704]: 
I0122 16:59:52.342856 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.884126 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ec82b856-4b7d-4f89-9a0b-dc76f23b3089","Type":"ContainerStarted","Data":"8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428"} Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.884710 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ec82b856-4b7d-4f89-9a0b-dc76f23b3089","Type":"ContainerStarted","Data":"56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8"} Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.884723 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ec82b856-4b7d-4f89-9a0b-dc76f23b3089","Type":"ContainerStarted","Data":"cff6fc9282d7daec3ad6b4cdef7c2b8b00d3faa22cc3272fdd6a3beb43b53a8b"} Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.884739 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.891332 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"08fe61f0-464a-41cd-a81e-510d187bbe10","Type":"ContainerStarted","Data":"17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b"} Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.891398 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"08fe61f0-464a-41cd-a81e-510d187bbe10","Type":"ContainerStarted","Data":"7a4cbc6d3b6b5488899c02bdcd3bf70ce27e714a1365e78342c2aa9eff322c1e"} Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.894000 4704 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2fa598a4-a571-48d9-919a-77d7f41fd15a","Type":"ContainerStarted","Data":"cf5c4ba114a35e2d28101af544c9874b684f81e05fa92caa18ecdcf3b61177b6"} Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.894169 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2fa598a4-a571-48d9-919a-77d7f41fd15a","Type":"ContainerStarted","Data":"cb71beee8497a704fe2d549dc6d50999e74f0042e4f04389faf7dbd902bbfba0"} Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.910138 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.9101186430000001 podStartE2EDuration="1.910118643s" podCreationTimestamp="2026-01-22 16:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:59:52.904664419 +0000 UTC m=+1885.549211139" watchObservedRunningTime="2026-01-22 16:59:52.910118643 +0000 UTC m=+1885.554665343" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.934934 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.934914056 podStartE2EDuration="1.934914056s" podCreationTimestamp="2026-01-22 16:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:59:52.928611457 +0000 UTC m=+1885.573158177" watchObservedRunningTime="2026-01-22 16:59:52.934914056 +0000 UTC m=+1885.579460776" Jan 22 16:59:52 crc kubenswrapper[4704]: I0122 16:59:52.953296 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.953275496 podStartE2EDuration="1.953275496s" 
podCreationTimestamp="2026-01-22 16:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 16:59:52.946381301 +0000 UTC m=+1885.590928001" watchObservedRunningTime="2026-01-22 16:59:52.953275496 +0000 UTC m=+1885.597822196" Jan 22 16:59:53 crc kubenswrapper[4704]: I0122 16:59:53.643894 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" path="/var/lib/kubelet/pods/a317b8ac-2de4-4e21-b74f-14690e86be56/volumes" Jan 22 16:59:55 crc kubenswrapper[4704]: I0122 16:59:55.033913 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 16:59:56 crc kubenswrapper[4704]: I0122 16:59:56.534282 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 16:59:56 crc kubenswrapper[4704]: I0122 16:59:56.556988 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.137291 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4"] Jan 22 17:00:00 crc kubenswrapper[4704]: E0122 17:00:00.138296 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="registry-server" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.138316 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="registry-server" Jan 22 17:00:00 crc kubenswrapper[4704]: E0122 17:00:00.138329 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="extract-utilities" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.138340 4704 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="extract-utilities" Jan 22 17:00:00 crc kubenswrapper[4704]: E0122 17:00:00.138366 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="extract-content" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.138376 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="extract-content" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.138567 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a317b8ac-2de4-4e21-b74f-14690e86be56" containerName="registry-server" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.139369 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.158074 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.158609 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4"] Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.158890 4704 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.306240 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-secret-volume\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc 
kubenswrapper[4704]: I0122 17:00:00.306695 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-config-volume\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.306890 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hgrg\" (UniqueName: \"kubernetes.io/projected/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-kube-api-access-4hgrg\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.408644 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hgrg\" (UniqueName: \"kubernetes.io/projected/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-kube-api-access-4hgrg\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.408777 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-secret-volume\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.408910 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-config-volume\") pod 
\"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.410081 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-config-volume\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.416032 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-secret-volume\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.440829 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hgrg\" (UniqueName: \"kubernetes.io/projected/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-kube-api-access-4hgrg\") pod \"collect-profiles-29485020-9gkp4\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.459595 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.915411 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4"] Jan 22 17:00:00 crc kubenswrapper[4704]: I0122 17:00:00.985542 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" event={"ID":"24b40bf2-8377-4b54-b9c9-b21c1ce876bd","Type":"ContainerStarted","Data":"c62293887bed03100a38cf1819c1f77bc44cab4a25ddbac294eeffd466a65bd1"} Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.534291 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.556004 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.561721 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.562308 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.571331 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.599593 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.993863 4704 generic.go:334] "Generic (PLEG): container finished" podID="24b40bf2-8377-4b54-b9c9-b21c1ce876bd" 
containerID="06abbd05018683ba96a0a0edcb15f2156532843274dec50894e35ac98c3b57c6" exitCode=0 Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.993905 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" event={"ID":"24b40bf2-8377-4b54-b9c9-b21c1ce876bd","Type":"ContainerDied","Data":"06abbd05018683ba96a0a0edcb15f2156532843274dec50894e35ac98c3b57c6"} Jan 22 17:00:01 crc kubenswrapper[4704]: I0122 17:00:01.994551 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:02 crc kubenswrapper[4704]: I0122 17:00:02.004254 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:02 crc kubenswrapper[4704]: I0122 17:00:02.021395 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:02 crc kubenswrapper[4704]: I0122 17:00:02.021445 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.303432 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.340331 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.340848 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-central-agent" containerID="cri-o://a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e" gracePeriod=30 Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.341016 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="sg-core" containerID="cri-o://52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614" gracePeriod=30 Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.341057 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-notification-agent" containerID="cri-o://3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480" gracePeriod=30 Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.341218 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="proxy-httpd" containerID="cri-o://daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f" gracePeriod=30 Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.448316 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-config-volume\") pod 
\"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.448581 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hgrg\" (UniqueName: \"kubernetes.io/projected/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-kube-api-access-4hgrg\") pod \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.448672 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-secret-volume\") pod \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\" (UID: \"24b40bf2-8377-4b54-b9c9-b21c1ce876bd\") " Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.450414 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "24b40bf2-8377-4b54-b9c9-b21c1ce876bd" (UID: "24b40bf2-8377-4b54-b9c9-b21c1ce876bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.454033 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24b40bf2-8377-4b54-b9c9-b21c1ce876bd" (UID: "24b40bf2-8377-4b54-b9c9-b21c1ce876bd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.457978 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-kube-api-access-4hgrg" (OuterVolumeSpecName: "kube-api-access-4hgrg") pod "24b40bf2-8377-4b54-b9c9-b21c1ce876bd" (UID: "24b40bf2-8377-4b54-b9c9-b21c1ce876bd"). InnerVolumeSpecName "kube-api-access-4hgrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.573384 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hgrg\" (UniqueName: \"kubernetes.io/projected/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-kube-api-access-4hgrg\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.573636 4704 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:03 crc kubenswrapper[4704]: I0122 17:00:03.573646 4704 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b40bf2-8377-4b54-b9c9-b21c1ce876bd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.030083 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.030078 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485020-9gkp4" event={"ID":"24b40bf2-8377-4b54-b9c9-b21c1ce876bd","Type":"ContainerDied","Data":"c62293887bed03100a38cf1819c1f77bc44cab4a25ddbac294eeffd466a65bd1"} Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.030284 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62293887bed03100a38cf1819c1f77bc44cab4a25ddbac294eeffd466a65bd1" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.032885 4704 generic.go:334] "Generic (PLEG): container finished" podID="c85e979b-2349-4140-a9b7-295eff282279" containerID="daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f" exitCode=0 Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.032916 4704 generic.go:334] "Generic (PLEG): container finished" podID="c85e979b-2349-4140-a9b7-295eff282279" containerID="52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614" exitCode=2 Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.032926 4704 generic.go:334] "Generic (PLEG): container finished" podID="c85e979b-2349-4140-a9b7-295eff282279" containerID="a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e" exitCode=0 Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.032970 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerDied","Data":"daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f"} Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.033014 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerDied","Data":"52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614"} Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.033026 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerDied","Data":"a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e"} Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.383939 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.167:3000/\": dial tcp 10.217.0.167:3000: connect: connection refused" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.474913 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bxv45"] Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.488655 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bxv45"] Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.509035 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcheraeae-account-delete-z26rj"] Jan 22 17:00:04 crc kubenswrapper[4704]: E0122 17:00:04.509347 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b40bf2-8377-4b54-b9c9-b21c1ce876bd" containerName="collect-profiles" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.509363 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b40bf2-8377-4b54-b9c9-b21c1ce876bd" containerName="collect-profiles" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.509513 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b40bf2-8377-4b54-b9c9-b21c1ce876bd" containerName="collect-profiles" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 
17:00:04.510055 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.529260 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcheraeae-account-delete-z26rj"] Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.565514 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.589448 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77jj\" (UniqueName: \"kubernetes.io/projected/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-kube-api-access-v77jj\") pod \"watcheraeae-account-delete-z26rj\" (UID: \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.589494 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-operator-scripts\") pod \"watcheraeae-account-delete-z26rj\" (UID: \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.620485 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.620751 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-kuttl-api-log" containerID="cri-o://56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8" gracePeriod=30 Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.620847 4704 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-api" containerID="cri-o://8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428" gracePeriod=30 Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.689767 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.689982 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="08fe61f0-464a-41cd-a81e-510d187bbe10" containerName="watcher-applier" containerID="cri-o://17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" gracePeriod=30 Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.691603 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77jj\" (UniqueName: \"kubernetes.io/projected/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-kube-api-access-v77jj\") pod \"watcheraeae-account-delete-z26rj\" (UID: \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.691668 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-operator-scripts\") pod \"watcheraeae-account-delete-z26rj\" (UID: \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.693490 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-operator-scripts\") pod \"watcheraeae-account-delete-z26rj\" (UID: 
\"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.729848 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77jj\" (UniqueName: \"kubernetes.io/projected/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-kube-api-access-v77jj\") pod \"watcheraeae-account-delete-z26rj\" (UID: \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:04 crc kubenswrapper[4704]: I0122 17:00:04.823106 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:05 crc kubenswrapper[4704]: I0122 17:00:05.057447 4704 generic.go:334] "Generic (PLEG): container finished" podID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerID="56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8" exitCode=143 Jan 22 17:00:05 crc kubenswrapper[4704]: I0122 17:00:05.057715 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ec82b856-4b7d-4f89-9a0b-dc76f23b3089","Type":"ContainerDied","Data":"56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8"} Jan 22 17:00:05 crc kubenswrapper[4704]: I0122 17:00:05.057822 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="2fa598a4-a571-48d9-919a-77d7f41fd15a" containerName="watcher-decision-engine" containerID="cri-o://cf5c4ba114a35e2d28101af544c9874b684f81e05fa92caa18ecdcf3b61177b6" gracePeriod=30 Jan 22 17:00:05 crc kubenswrapper[4704]: I0122 17:00:05.298980 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcheraeae-account-delete-z26rj"] Jan 22 17:00:05 crc kubenswrapper[4704]: W0122 17:00:05.307195 4704 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda651df8b_eab7_4ba2_9fcd_ac6a87b69548.slice/crio-3c12dd35251fc2dbc4e4262a2d3846c34fe730595cdd43ed4e9da3b09e4afc40 WatchSource:0}: Error finding container 3c12dd35251fc2dbc4e4262a2d3846c34fe730595cdd43ed4e9da3b09e4afc40: Status 404 returned error can't find the container with id 3c12dd35251fc2dbc4e4262a2d3846c34fe730595cdd43ed4e9da3b09e4afc40 Jan 22 17:00:05 crc kubenswrapper[4704]: I0122 17:00:05.655496 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df936479-fdcd-4406-a4bb-dd252552db0f" path="/var/lib/kubelet/pods/df936479-fdcd-4406-a4bb-dd252552db0f/volumes" Jan 22 17:00:05 crc kubenswrapper[4704]: I0122 17:00:05.934306 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:05 crc kubenswrapper[4704]: I0122 17:00:05.937118 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.069810 4704 generic.go:334] "Generic (PLEG): container finished" podID="a651df8b-eab7-4ba2-9fcd-ac6a87b69548" containerID="de197db95e4be9977e67d7e90e976193a01340422df2f08260af5f191f1de523" exitCode=0 Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.069894 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" event={"ID":"a651df8b-eab7-4ba2-9fcd-ac6a87b69548","Type":"ContainerDied","Data":"de197db95e4be9977e67d7e90e976193a01340422df2f08260af5f191f1de523"} Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.069920 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" event={"ID":"a651df8b-eab7-4ba2-9fcd-ac6a87b69548","Type":"ContainerStarted","Data":"3c12dd35251fc2dbc4e4262a2d3846c34fe730595cdd43ed4e9da3b09e4afc40"} Jan 22 17:00:06 crc kubenswrapper[4704]: 
I0122 17:00:06.084366 4704 generic.go:334] "Generic (PLEG): container finished" podID="c85e979b-2349-4140-a9b7-295eff282279" containerID="3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480" exitCode=0 Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.084470 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerDied","Data":"3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480"} Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.084501 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c85e979b-2349-4140-a9b7-295eff282279","Type":"ContainerDied","Data":"66a8d3662303723ed0f50647090183e807917cd5678a5c1fc36ef8bd9066f21e"} Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.084517 4704 scope.go:117] "RemoveContainer" containerID="daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.084650 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.088450 4704 generic.go:334] "Generic (PLEG): container finished" podID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerID="8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428" exitCode=0 Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.088528 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ec82b856-4b7d-4f89-9a0b-dc76f23b3089","Type":"ContainerDied","Data":"8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428"} Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.088554 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ec82b856-4b7d-4f89-9a0b-dc76f23b3089","Type":"ContainerDied","Data":"cff6fc9282d7daec3ad6b4cdef7c2b8b00d3faa22cc3272fdd6a3beb43b53a8b"} Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.088638 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.109664 4704 scope.go:117] "RemoveContainer" containerID="52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112030 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-config-data\") pod \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112086 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-cert-memcached-mtls\") pod \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112126 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-log-httpd\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112168 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-custom-prometheus-ca\") pod \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112204 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-logs\") pod \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " 
Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112264 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-sg-core-conf-yaml\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112304 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-run-httpd\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112321 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wnfr\" (UniqueName: \"kubernetes.io/projected/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-kube-api-access-2wnfr\") pod \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112337 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-combined-ca-bundle\") pod \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\" (UID: \"ec82b856-4b7d-4f89-9a0b-dc76f23b3089\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112357 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-combined-ca-bundle\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112398 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-config-data\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112418 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-ceilometer-tls-certs\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112439 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-scripts\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.112473 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzd5p\" (UniqueName: \"kubernetes.io/projected/c85e979b-2349-4140-a9b7-295eff282279-kube-api-access-bzd5p\") pod \"c85e979b-2349-4140-a9b7-295eff282279\" (UID: \"c85e979b-2349-4140-a9b7-295eff282279\") " Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.113277 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.117258 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85e979b-2349-4140-a9b7-295eff282279-kube-api-access-bzd5p" (OuterVolumeSpecName: "kube-api-access-bzd5p") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "kube-api-access-bzd5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.121787 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-logs" (OuterVolumeSpecName: "logs") pod "ec82b856-4b7d-4f89-9a0b-dc76f23b3089" (UID: "ec82b856-4b7d-4f89-9a0b-dc76f23b3089"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.122839 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.124227 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-scripts" (OuterVolumeSpecName: "scripts") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.129875 4704 scope.go:117] "RemoveContainer" containerID="3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.135480 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-kube-api-access-2wnfr" (OuterVolumeSpecName: "kube-api-access-2wnfr") pod "ec82b856-4b7d-4f89-9a0b-dc76f23b3089" (UID: "ec82b856-4b7d-4f89-9a0b-dc76f23b3089"). InnerVolumeSpecName "kube-api-access-2wnfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.159857 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.161551 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ec82b856-4b7d-4f89-9a0b-dc76f23b3089" (UID: "ec82b856-4b7d-4f89-9a0b-dc76f23b3089"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.163232 4704 scope.go:117] "RemoveContainer" containerID="a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.173289 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec82b856-4b7d-4f89-9a0b-dc76f23b3089" (UID: "ec82b856-4b7d-4f89-9a0b-dc76f23b3089"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.181203 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.187744 4704 scope.go:117] "RemoveContainer" containerID="daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.189597 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f\": container with ID starting with daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f not found: ID does not exist" containerID="daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.189648 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f"} err="failed to get container status \"daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f\": rpc error: code = NotFound desc = could not find container \"daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f\": container with ID starting with daaae042ce585e8f779fc91a3fd227eabeae5cb300957077e592309c56eca41f not found: ID does not exist" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.189676 4704 scope.go:117] "RemoveContainer" containerID="52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.189911 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614\": container with ID starting with 52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614 not found: ID does not exist" containerID="52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.189942 
4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614"} err="failed to get container status \"52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614\": rpc error: code = NotFound desc = could not find container \"52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614\": container with ID starting with 52a0cbd90abf566e83cadfeeedc6bf73ca7de5006ebf52ecb4012d12dafec614 not found: ID does not exist" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.189961 4704 scope.go:117] "RemoveContainer" containerID="3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.190325 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480\": container with ID starting with 3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480 not found: ID does not exist" containerID="3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.190359 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480"} err="failed to get container status \"3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480\": rpc error: code = NotFound desc = could not find container \"3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480\": container with ID starting with 3a53655926a492c8342b26b1e34234eee350e48846dc7a92656d32745e5f1480 not found: ID does not exist" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.190379 4704 scope.go:117] "RemoveContainer" containerID="a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 
17:00:06.190658 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e\": container with ID starting with a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e not found: ID does not exist" containerID="a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.190692 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e"} err="failed to get container status \"a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e\": rpc error: code = NotFound desc = could not find container \"a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e\": container with ID starting with a447ec2134f49c91aacffd3cf0b7a59ee9bdfbda0416a317d296261dde0dea5e not found: ID does not exist" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.190705 4704 scope.go:117] "RemoveContainer" containerID="8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.192348 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-config-data" (OuterVolumeSpecName: "config-data") pod "ec82b856-4b7d-4f89-9a0b-dc76f23b3089" (UID: "ec82b856-4b7d-4f89-9a0b-dc76f23b3089"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214725 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214755 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214770 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wnfr\" (UniqueName: \"kubernetes.io/projected/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-kube-api-access-2wnfr\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214781 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214864 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214876 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214886 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214897 4704 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzd5p\" (UniqueName: \"kubernetes.io/projected/c85e979b-2349-4140-a9b7-295eff282279-kube-api-access-bzd5p\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214907 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214917 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c85e979b-2349-4140-a9b7-295eff282279-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.214927 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.215045 4704 scope.go:117] "RemoveContainer" containerID="56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.230253 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ec82b856-4b7d-4f89-9a0b-dc76f23b3089" (UID: "ec82b856-4b7d-4f89-9a0b-dc76f23b3089"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.241387 4704 scope.go:117] "RemoveContainer" containerID="8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.242039 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428\": container with ID starting with 8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428 not found: ID does not exist" containerID="8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.242078 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428"} err="failed to get container status \"8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428\": rpc error: code = NotFound desc = could not find container \"8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428\": container with ID starting with 8881d47cca0be6d99d36362ec953250c703baf731a9a752e98866145f9d8e428 not found: ID does not exist" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.242106 4704 scope.go:117] "RemoveContainer" containerID="56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.242591 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8\": container with ID starting with 56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8 not found: ID does not exist" containerID="56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.242705 
4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8"} err="failed to get container status \"56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8\": rpc error: code = NotFound desc = could not find container \"56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8\": container with ID starting with 56a7deeab1579fb7505ea28b4a22b433d3cdee5404dfd48bec425111cfddb3d8 not found: ID does not exist" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.246520 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.262976 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-config-data" (OuterVolumeSpecName: "config-data") pod "c85e979b-2349-4140-a9b7-295eff282279" (UID: "c85e979b-2349-4140-a9b7-295eff282279"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.316549 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.316582 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85e979b-2349-4140-a9b7-295eff282279-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.316591 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ec82b856-4b7d-4f89-9a0b-dc76f23b3089-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.425823 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.442399 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.457068 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.469711 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.475905 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.476271 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="sg-core" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476291 4704 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="sg-core" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.476314 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="proxy-httpd" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476322 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="proxy-httpd" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.476346 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-central-agent" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476353 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-central-agent" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.476364 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-kuttl-api-log" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476371 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-kuttl-api-log" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.476386 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-api" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476395 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-api" Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.476405 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-notification-agent" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476411 4704 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-notification-agent" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476580 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-kuttl-api-log" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476592 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-notification-agent" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476603 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="proxy-httpd" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476617 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="sg-core" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476626 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85e979b-2349-4140-a9b7-295eff282279" containerName="ceilometer-central-agent" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.476640 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" containerName="watcher-api" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.478014 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.480385 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.480805 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.482214 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.497502 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.537905 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.560942 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 17:00:06.573914 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:00:06 crc kubenswrapper[4704]: E0122 
17:00:06.573985 4704 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="08fe61f0-464a-41cd-a81e-510d187bbe10" containerName="watcher-applier" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.620779 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhc7\" (UniqueName: \"kubernetes.io/projected/63a91bb9-bd13-47c9-954b-b68c6482ea78-kube-api-access-vrhc7\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.620869 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-scripts\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.620895 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-log-httpd\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.620947 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.620979 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-run-httpd\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.621006 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.621051 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.621164 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-config-data\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722207 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722275 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-run-httpd\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722304 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722329 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722351 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-config-data\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722381 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhc7\" (UniqueName: \"kubernetes.io/projected/63a91bb9-bd13-47c9-954b-b68c6482ea78-kube-api-access-vrhc7\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722431 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-scripts\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722453 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-log-httpd\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.722984 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-log-httpd\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.726751 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.729530 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-config-data\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.729567 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-run-httpd\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.730285 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.735603 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-scripts\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.741949 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.758385 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhc7\" (UniqueName: \"kubernetes.io/projected/63a91bb9-bd13-47c9-954b-b68c6482ea78-kube-api-access-vrhc7\") pod \"ceilometer-0\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:06 crc kubenswrapper[4704]: I0122 17:00:06.794965 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.222963 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:07 crc kubenswrapper[4704]: W0122 17:00:07.234279 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63a91bb9_bd13_47c9_954b_b68c6482ea78.slice/crio-274aea33dd507d9a8e995010024c1d40bf47ee8e5632b9765bfe3d7343e66453 WatchSource:0}: Error finding container 274aea33dd507d9a8e995010024c1d40bf47ee8e5632b9765bfe3d7343e66453: Status 404 returned error can't find the container with id 274aea33dd507d9a8e995010024c1d40bf47ee8e5632b9765bfe3d7343e66453 Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.345967 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.406563 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.437518 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v77jj\" (UniqueName: \"kubernetes.io/projected/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-kube-api-access-v77jj\") pod \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\" (UID: \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.437843 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-operator-scripts\") pod \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\" (UID: \"a651df8b-eab7-4ba2-9fcd-ac6a87b69548\") " Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.438462 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a651df8b-eab7-4ba2-9fcd-ac6a87b69548" (UID: "a651df8b-eab7-4ba2-9fcd-ac6a87b69548"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.441955 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-kube-api-access-v77jj" (OuterVolumeSpecName: "kube-api-access-v77jj") pod "a651df8b-eab7-4ba2-9fcd-ac6a87b69548" (UID: "a651df8b-eab7-4ba2-9fcd-ac6a87b69548"). InnerVolumeSpecName "kube-api-access-v77jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.540197 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.540240 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v77jj\" (UniqueName: \"kubernetes.io/projected/a651df8b-eab7-4ba2-9fcd-ac6a87b69548-kube-api-access-v77jj\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.643749 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85e979b-2349-4140-a9b7-295eff282279" path="/var/lib/kubelet/pods/c85e979b-2349-4140-a9b7-295eff282279/volumes" Jan 22 17:00:07 crc kubenswrapper[4704]: I0122 17:00:07.644858 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec82b856-4b7d-4f89-9a0b-dc76f23b3089" path="/var/lib/kubelet/pods/ec82b856-4b7d-4f89-9a0b-dc76f23b3089/volumes" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.108514 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.108641 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcheraeae-account-delete-z26rj" event={"ID":"a651df8b-eab7-4ba2-9fcd-ac6a87b69548","Type":"ContainerDied","Data":"3c12dd35251fc2dbc4e4262a2d3846c34fe730595cdd43ed4e9da3b09e4afc40"} Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.109426 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c12dd35251fc2dbc4e4262a2d3846c34fe730595cdd43ed4e9da3b09e4afc40" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.110538 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerStarted","Data":"274aea33dd507d9a8e995010024c1d40bf47ee8e5632b9765bfe3d7343e66453"} Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.624851 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.764213 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-config-data\") pod \"08fe61f0-464a-41cd-a81e-510d187bbe10\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.764403 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-cert-memcached-mtls\") pod \"08fe61f0-464a-41cd-a81e-510d187bbe10\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.764430 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fe61f0-464a-41cd-a81e-510d187bbe10-logs\") pod \"08fe61f0-464a-41cd-a81e-510d187bbe10\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.764455 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-combined-ca-bundle\") pod \"08fe61f0-464a-41cd-a81e-510d187bbe10\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.764512 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjtdn\" (UniqueName: \"kubernetes.io/projected/08fe61f0-464a-41cd-a81e-510d187bbe10-kube-api-access-vjtdn\") pod \"08fe61f0-464a-41cd-a81e-510d187bbe10\" (UID: \"08fe61f0-464a-41cd-a81e-510d187bbe10\") " Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.765321 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/08fe61f0-464a-41cd-a81e-510d187bbe10-logs" (OuterVolumeSpecName: "logs") pod "08fe61f0-464a-41cd-a81e-510d187bbe10" (UID: "08fe61f0-464a-41cd-a81e-510d187bbe10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.772005 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fe61f0-464a-41cd-a81e-510d187bbe10-kube-api-access-vjtdn" (OuterVolumeSpecName: "kube-api-access-vjtdn") pod "08fe61f0-464a-41cd-a81e-510d187bbe10" (UID: "08fe61f0-464a-41cd-a81e-510d187bbe10"). InnerVolumeSpecName "kube-api-access-vjtdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.787989 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08fe61f0-464a-41cd-a81e-510d187bbe10" (UID: "08fe61f0-464a-41cd-a81e-510d187bbe10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.822481 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-config-data" (OuterVolumeSpecName: "config-data") pod "08fe61f0-464a-41cd-a81e-510d187bbe10" (UID: "08fe61f0-464a-41cd-a81e-510d187bbe10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.838952 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "08fe61f0-464a-41cd-a81e-510d187bbe10" (UID: "08fe61f0-464a-41cd-a81e-510d187bbe10"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.866485 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.866518 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fe61f0-464a-41cd-a81e-510d187bbe10-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.866527 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.866535 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjtdn\" (UniqueName: \"kubernetes.io/projected/08fe61f0-464a-41cd-a81e-510d187bbe10-kube-api-access-vjtdn\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:08 crc kubenswrapper[4704]: I0122 17:00:08.866545 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fe61f0-464a-41cd-a81e-510d187bbe10-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.124328 4704 generic.go:334] "Generic (PLEG): container finished" podID="08fe61f0-464a-41cd-a81e-510d187bbe10" containerID="17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" exitCode=0 Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.124782 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"08fe61f0-464a-41cd-a81e-510d187bbe10","Type":"ContainerDied","Data":"17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b"} Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.124832 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"08fe61f0-464a-41cd-a81e-510d187bbe10","Type":"ContainerDied","Data":"7a4cbc6d3b6b5488899c02bdcd3bf70ce27e714a1365e78342c2aa9eff322c1e"} Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.124852 4704 scope.go:117] "RemoveContainer" containerID="17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.124993 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.128438 4704 generic.go:334] "Generic (PLEG): container finished" podID="2fa598a4-a571-48d9-919a-77d7f41fd15a" containerID="cf5c4ba114a35e2d28101af544c9874b684f81e05fa92caa18ecdcf3b61177b6" exitCode=0 Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.128506 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2fa598a4-a571-48d9-919a-77d7f41fd15a","Type":"ContainerDied","Data":"cf5c4ba114a35e2d28101af544c9874b684f81e05fa92caa18ecdcf3b61177b6"} Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.130441 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerStarted","Data":"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794"} Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.130475 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerStarted","Data":"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f"} Jan 22 17:00:09 crc kubenswrapper[4704]: E0122 17:00:09.144714 4704 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fa598a4_a571_48d9_919a_77d7f41fd15a.slice/crio-cf5c4ba114a35e2d28101af544c9874b684f81e05fa92caa18ecdcf3b61177b6.scope\": RecentStats: unable to find data in memory cache]" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.159849 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.163346 4704 scope.go:117] "RemoveContainer" containerID="17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" Jan 22 17:00:09 crc kubenswrapper[4704]: E0122 17:00:09.163949 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b\": container with ID starting with 17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b not found: ID does not exist" containerID="17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.164002 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b"} err="failed to get container status \"17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b\": rpc error: code = NotFound desc = could not find container \"17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b\": container with ID starting with 17cd8ede80793ec3904c170a3a799d0e2494a3ce8a4516daf40a129716a1174b not found: ID does not exist" Jan 22 17:00:09 crc 
kubenswrapper[4704]: I0122 17:00:09.167139 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.414463 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.547587 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-w4tbs"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.561866 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-w4tbs"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.570665 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.575689 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-config-data\") pod \"2fa598a4-a571-48d9-919a-77d7f41fd15a\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.575766 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa598a4-a571-48d9-919a-77d7f41fd15a-logs\") pod \"2fa598a4-a571-48d9-919a-77d7f41fd15a\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.575844 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-cert-memcached-mtls\") pod \"2fa598a4-a571-48d9-919a-77d7f41fd15a\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.575874 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-custom-prometheus-ca\") pod \"2fa598a4-a571-48d9-919a-77d7f41fd15a\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.575935 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f85vc\" (UniqueName: \"kubernetes.io/projected/2fa598a4-a571-48d9-919a-77d7f41fd15a-kube-api-access-f85vc\") pod \"2fa598a4-a571-48d9-919a-77d7f41fd15a\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.575951 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-combined-ca-bundle\") pod \"2fa598a4-a571-48d9-919a-77d7f41fd15a\" (UID: \"2fa598a4-a571-48d9-919a-77d7f41fd15a\") " Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.578208 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa598a4-a571-48d9-919a-77d7f41fd15a-logs" (OuterVolumeSpecName: "logs") pod "2fa598a4-a571-48d9-919a-77d7f41fd15a" (UID: "2fa598a4-a571-48d9-919a-77d7f41fd15a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.580129 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcheraeae-account-delete-z26rj"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.582071 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa598a4-a571-48d9-919a-77d7f41fd15a-kube-api-access-f85vc" (OuterVolumeSpecName: "kube-api-access-f85vc") pod "2fa598a4-a571-48d9-919a-77d7f41fd15a" (UID: "2fa598a4-a571-48d9-919a-77d7f41fd15a"). 
InnerVolumeSpecName "kube-api-access-f85vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.586587 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-aeae-account-create-update-w9nl2"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.591904 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcheraeae-account-delete-z26rj"] Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.600596 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fa598a4-a571-48d9-919a-77d7f41fd15a" (UID: "2fa598a4-a571-48d9-919a-77d7f41fd15a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.608287 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2fa598a4-a571-48d9-919a-77d7f41fd15a" (UID: "2fa598a4-a571-48d9-919a-77d7f41fd15a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.615613 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-config-data" (OuterVolumeSpecName: "config-data") pod "2fa598a4-a571-48d9-919a-77d7f41fd15a" (UID: "2fa598a4-a571-48d9-919a-77d7f41fd15a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.632265 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2fa598a4-a571-48d9-919a-77d7f41fd15a" (UID: "2fa598a4-a571-48d9-919a-77d7f41fd15a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.645621 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fe61f0-464a-41cd-a81e-510d187bbe10" path="/var/lib/kubelet/pods/08fe61f0-464a-41cd-a81e-510d187bbe10/volumes" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.646116 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a651df8b-eab7-4ba2-9fcd-ac6a87b69548" path="/var/lib/kubelet/pods/a651df8b-eab7-4ba2-9fcd-ac6a87b69548/volumes" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.646588 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb42bced-7bce-43db-8cd9-efa728c629a4" path="/var/lib/kubelet/pods/bb42bced-7bce-43db-8cd9-efa728c629a4/volumes" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.647670 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedfbfe9-091a-4b70-b6fe-e24214f2bbe7" path="/var/lib/kubelet/pods/dedfbfe9-091a-4b70-b6fe-e24214f2bbe7/volumes" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.677965 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.677994 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.678005 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f85vc\" (UniqueName: \"kubernetes.io/projected/2fa598a4-a571-48d9-919a-77d7f41fd15a-kube-api-access-f85vc\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.678013 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.678021 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa598a4-a571-48d9-919a-77d7f41fd15a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:09 crc kubenswrapper[4704]: I0122 17:00:09.678029 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa598a4-a571-48d9-919a-77d7f41fd15a-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.138462 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerStarted","Data":"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907"} Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.141994 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"2fa598a4-a571-48d9-919a-77d7f41fd15a","Type":"ContainerDied","Data":"cb71beee8497a704fe2d549dc6d50999e74f0042e4f04389faf7dbd902bbfba0"} Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.142027 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.142282 4704 scope.go:117] "RemoveContainer" containerID="cf5c4ba114a35e2d28101af544c9874b684f81e05fa92caa18ecdcf3b61177b6" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.158018 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.166469 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.625209 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-zzt66"] Jan 22 17:00:10 crc kubenswrapper[4704]: E0122 17:00:10.625741 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fe61f0-464a-41cd-a81e-510d187bbe10" containerName="watcher-applier" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.625772 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fe61f0-464a-41cd-a81e-510d187bbe10" containerName="watcher-applier" Jan 22 17:00:10 crc kubenswrapper[4704]: E0122 17:00:10.625831 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa598a4-a571-48d9-919a-77d7f41fd15a" containerName="watcher-decision-engine" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.625847 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa598a4-a571-48d9-919a-77d7f41fd15a" containerName="watcher-decision-engine" Jan 22 17:00:10 crc kubenswrapper[4704]: E0122 17:00:10.625872 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a651df8b-eab7-4ba2-9fcd-ac6a87b69548" containerName="mariadb-account-delete" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.625885 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="a651df8b-eab7-4ba2-9fcd-ac6a87b69548" 
containerName="mariadb-account-delete" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.626177 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fe61f0-464a-41cd-a81e-510d187bbe10" containerName="watcher-applier" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.626211 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="a651df8b-eab7-4ba2-9fcd-ac6a87b69548" containerName="mariadb-account-delete" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.626241 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa598a4-a571-48d9-919a-77d7f41fd15a" containerName="watcher-decision-engine" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.627069 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.635347 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zzt66"] Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.722251 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-04da-account-create-update-f22kc"] Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.723180 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.725150 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.737814 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-04da-account-create-update-f22kc"] Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.794759 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ck72\" (UniqueName: \"kubernetes.io/projected/53f84d47-64ad-4221-99ef-6a439e6bd75b-kube-api-access-7ck72\") pod \"watcher-db-create-zzt66\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") " pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.794853 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gzhk\" (UniqueName: \"kubernetes.io/projected/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-kube-api-access-4gzhk\") pod \"watcher-04da-account-create-update-f22kc\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") " pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.794886 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-operator-scripts\") pod \"watcher-04da-account-create-update-f22kc\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") " pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.794908 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/53f84d47-64ad-4221-99ef-6a439e6bd75b-operator-scripts\") pod \"watcher-db-create-zzt66\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") " pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.895717 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gzhk\" (UniqueName: \"kubernetes.io/projected/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-kube-api-access-4gzhk\") pod \"watcher-04da-account-create-update-f22kc\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") " pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.895778 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-operator-scripts\") pod \"watcher-04da-account-create-update-f22kc\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") " pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.895822 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f84d47-64ad-4221-99ef-6a439e6bd75b-operator-scripts\") pod \"watcher-db-create-zzt66\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") " pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.895894 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ck72\" (UniqueName: \"kubernetes.io/projected/53f84d47-64ad-4221-99ef-6a439e6bd75b-kube-api-access-7ck72\") pod \"watcher-db-create-zzt66\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") " pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.896903 4704 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-operator-scripts\") pod \"watcher-04da-account-create-update-f22kc\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") " pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.896965 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f84d47-64ad-4221-99ef-6a439e6bd75b-operator-scripts\") pod \"watcher-db-create-zzt66\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") " pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.912413 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gzhk\" (UniqueName: \"kubernetes.io/projected/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-kube-api-access-4gzhk\") pod \"watcher-04da-account-create-update-f22kc\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") " pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.920266 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ck72\" (UniqueName: \"kubernetes.io/projected/53f84d47-64ad-4221-99ef-6a439e6bd75b-kube-api-access-7ck72\") pod \"watcher-db-create-zzt66\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") " pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:10 crc kubenswrapper[4704]: I0122 17:00:10.941180 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zzt66" Jan 22 17:00:11 crc kubenswrapper[4704]: I0122 17:00:11.040129 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" Jan 22 17:00:11 crc kubenswrapper[4704]: I0122 17:00:11.405275 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zzt66"] Jan 22 17:00:11 crc kubenswrapper[4704]: I0122 17:00:11.579686 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-04da-account-create-update-f22kc"] Jan 22 17:00:11 crc kubenswrapper[4704]: W0122 17:00:11.584084 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd66d1c3_d3c5_43ce_b451_5d57d24df04b.slice/crio-f7d89fd973c8032f038d9937c4d136f31cf3c4f52c36f8e75e8da39b778b9ff8 WatchSource:0}: Error finding container f7d89fd973c8032f038d9937c4d136f31cf3c4f52c36f8e75e8da39b778b9ff8: Status 404 returned error can't find the container with id f7d89fd973c8032f038d9937c4d136f31cf3c4f52c36f8e75e8da39b778b9ff8 Jan 22 17:00:11 crc kubenswrapper[4704]: I0122 17:00:11.648639 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa598a4-a571-48d9-919a-77d7f41fd15a" path="/var/lib/kubelet/pods/2fa598a4-a571-48d9-919a-77d7f41fd15a/volumes" Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.224179 4704 generic.go:334] "Generic (PLEG): container finished" podID="53f84d47-64ad-4221-99ef-6a439e6bd75b" containerID="87ef7ee88781891fc56a688f64f3535316f5130d5e49e9da15a49f55e356f24f" exitCode=0 Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.224254 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zzt66" event={"ID":"53f84d47-64ad-4221-99ef-6a439e6bd75b","Type":"ContainerDied","Data":"87ef7ee88781891fc56a688f64f3535316f5130d5e49e9da15a49f55e356f24f"} Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.224566 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zzt66" 
event={"ID":"53f84d47-64ad-4221-99ef-6a439e6bd75b","Type":"ContainerStarted","Data":"2d58a3d19f6e2a8a5f5e661e8c533c471279b4b3c74f5a3fefae5ea6a593bde3"} Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.226402 4704 generic.go:334] "Generic (PLEG): container finished" podID="bd66d1c3-d3c5-43ce-b451-5d57d24df04b" containerID="607ed077fcc302dc95d7ab86055cd7f2920cb11fb0826e68d42feeb8201ed521" exitCode=0 Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.226492 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" event={"ID":"bd66d1c3-d3c5-43ce-b451-5d57d24df04b","Type":"ContainerDied","Data":"607ed077fcc302dc95d7ab86055cd7f2920cb11fb0826e68d42feeb8201ed521"} Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.226527 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" event={"ID":"bd66d1c3-d3c5-43ce-b451-5d57d24df04b","Type":"ContainerStarted","Data":"f7d89fd973c8032f038d9937c4d136f31cf3c4f52c36f8e75e8da39b778b9ff8"} Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.229065 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerStarted","Data":"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a"} Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.229215 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-central-agent" containerID="cri-o://06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794" gracePeriod=30 Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.229240 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.229304 
4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="sg-core" containerID="cri-o://d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" gracePeriod=30 Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.229335 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-notification-agent" containerID="cri-o://7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f" gracePeriod=30 Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.229413 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="proxy-httpd" containerID="cri-o://5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" gracePeriod=30 Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.284114 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.382861844 podStartE2EDuration="6.28408491s" podCreationTimestamp="2026-01-22 17:00:06 +0000 UTC" firstStartedPulling="2026-01-22 17:00:07.23649518 +0000 UTC m=+1899.881041870" lastFinishedPulling="2026-01-22 17:00:11.137718246 +0000 UTC m=+1903.782264936" observedRunningTime="2026-01-22 17:00:12.277971317 +0000 UTC m=+1904.922518057" watchObservedRunningTime="2026-01-22 17:00:12.28408491 +0000 UTC m=+1904.928631620" Jan 22 17:00:12 crc kubenswrapper[4704]: I0122 17:00:12.985402 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041112 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-run-httpd\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041223 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-scripts\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041251 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-ceilometer-tls-certs\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041277 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrhc7\" (UniqueName: \"kubernetes.io/projected/63a91bb9-bd13-47c9-954b-b68c6482ea78-kube-api-access-vrhc7\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041299 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-log-httpd\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041380 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-config-data\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041405 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-combined-ca-bundle\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041487 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-sg-core-conf-yaml\") pod \"63a91bb9-bd13-47c9-954b-b68c6482ea78\" (UID: \"63a91bb9-bd13-47c9-954b-b68c6482ea78\") " Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.041928 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.042425 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.047619 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-scripts" (OuterVolumeSpecName: "scripts") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.047968 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a91bb9-bd13-47c9-954b-b68c6482ea78-kube-api-access-vrhc7" (OuterVolumeSpecName: "kube-api-access-vrhc7") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "kube-api-access-vrhc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.066279 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.084276 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.096807 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.143089 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.143140 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.143157 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.143174 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.143192 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.143209 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrhc7\" (UniqueName: 
\"kubernetes.io/projected/63a91bb9-bd13-47c9-954b-b68c6482ea78-kube-api-access-vrhc7\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.143226 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a91bb9-bd13-47c9-954b-b68c6482ea78-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.154992 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-config-data" (OuterVolumeSpecName: "config-data") pod "63a91bb9-bd13-47c9-954b-b68c6482ea78" (UID: "63a91bb9-bd13-47c9-954b-b68c6482ea78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.238637 4704 generic.go:334] "Generic (PLEG): container finished" podID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerID="5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" exitCode=0 Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.238675 4704 generic.go:334] "Generic (PLEG): container finished" podID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerID="d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" exitCode=2 Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.238688 4704 generic.go:334] "Generic (PLEG): container finished" podID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerID="7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f" exitCode=0 Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.238700 4704 generic.go:334] "Generic (PLEG): container finished" podID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerID="06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794" exitCode=0 Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.238931 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.253023 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerDied","Data":"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a"} Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.253077 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerDied","Data":"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907"} Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.253088 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerDied","Data":"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f"} Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.253096 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerDied","Data":"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794"} Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.253105 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"63a91bb9-bd13-47c9-954b-b68c6482ea78","Type":"ContainerDied","Data":"274aea33dd507d9a8e995010024c1d40bf47ee8e5632b9765bfe3d7343e66453"} Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.253122 4704 scope.go:117] "RemoveContainer" containerID="5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.254611 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/63a91bb9-bd13-47c9-954b-b68c6482ea78-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.287095 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.303994 4704 scope.go:117] "RemoveContainer" containerID="d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.316477 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.335845 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 17:00:13.336206 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="sg-core" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336224 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="sg-core" Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 17:00:13.336237 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="proxy-httpd" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336244 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="proxy-httpd" Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 17:00:13.336252 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-central-agent" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336258 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-central-agent" Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 
17:00:13.336267 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-notification-agent" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336273 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-notification-agent" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336453 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-central-agent" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336462 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="proxy-httpd" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336474 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="ceilometer-notification-agent" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.336491 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" containerName="sg-core" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.337904 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.343558 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.343687 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.343744 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.353401 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.372058 4704 scope.go:117] "RemoveContainer" containerID="7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.395358 4704 scope.go:117] "RemoveContainer" containerID="06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.417051 4704 scope.go:117] "RemoveContainer" containerID="5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 17:00:13.420624 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": container with ID starting with 5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a not found: ID does not exist" containerID="5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.420666 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a"} 
err="failed to get container status \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": rpc error: code = NotFound desc = could not find container \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": container with ID starting with 5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.420690 4704 scope.go:117] "RemoveContainer" containerID="d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 17:00:13.421100 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": container with ID starting with d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907 not found: ID does not exist" containerID="d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.421151 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907"} err="failed to get container status \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": rpc error: code = NotFound desc = could not find container \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": container with ID starting with d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907 not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.421179 4704 scope.go:117] "RemoveContainer" containerID="7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f" Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 17:00:13.421678 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": container with ID starting with 7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f not found: ID does not exist" containerID="7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.421702 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f"} err="failed to get container status \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": rpc error: code = NotFound desc = could not find container \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": container with ID starting with 7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.421715 4704 scope.go:117] "RemoveContainer" containerID="06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794" Jan 22 17:00:13 crc kubenswrapper[4704]: E0122 17:00:13.423145 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": container with ID starting with 06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794 not found: ID does not exist" containerID="06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.423167 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794"} err="failed to get container status \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": rpc error: code = NotFound desc = could not find container \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": container with ID 
starting with 06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794 not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.423184 4704 scope.go:117] "RemoveContainer" containerID="5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.423474 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a"} err="failed to get container status \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": rpc error: code = NotFound desc = could not find container \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": container with ID starting with 5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.423492 4704 scope.go:117] "RemoveContainer" containerID="d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.424369 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907"} err="failed to get container status \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": rpc error: code = NotFound desc = could not find container \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": container with ID starting with d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907 not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.424393 4704 scope.go:117] "RemoveContainer" containerID="7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.424583 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f"} err="failed to get container status \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": rpc error: code = NotFound desc = could not find container \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": container with ID starting with 7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.424603 4704 scope.go:117] "RemoveContainer" containerID="06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.428246 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794"} err="failed to get container status \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": rpc error: code = NotFound desc = could not find container \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": container with ID starting with 06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794 not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.428266 4704 scope.go:117] "RemoveContainer" containerID="5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.428465 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a"} err="failed to get container status \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": rpc error: code = NotFound desc = could not find container \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": container with ID starting with 5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a not found: ID does not 
exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.428492 4704 scope.go:117] "RemoveContainer" containerID="d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.428700 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907"} err="failed to get container status \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": rpc error: code = NotFound desc = could not find container \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": container with ID starting with d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907 not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.428723 4704 scope.go:117] "RemoveContainer" containerID="7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.429111 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f"} err="failed to get container status \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": rpc error: code = NotFound desc = could not find container \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": container with ID starting with 7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.429131 4704 scope.go:117] "RemoveContainer" containerID="06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.429279 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794"} err="failed to get container status 
\"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": rpc error: code = NotFound desc = could not find container \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": container with ID starting with 06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794 not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.429300 4704 scope.go:117] "RemoveContainer" containerID="5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.430270 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a"} err="failed to get container status \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": rpc error: code = NotFound desc = could not find container \"5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a\": container with ID starting with 5a0c283c5526f6f6540f98b475296713b8269f0095e1a9ba71a91dbaa6afb57a not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.430288 4704 scope.go:117] "RemoveContainer" containerID="d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.431166 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907"} err="failed to get container status \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": rpc error: code = NotFound desc = could not find container \"d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907\": container with ID starting with d8e8b838e276534d30024ddfe22ef0e19fdad3f30c8b414c3527c54e6a85e907 not found: ID does not exist" Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.431205 4704 scope.go:117] "RemoveContainer" 
containerID="7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.432024 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f"} err="failed to get container status \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": rpc error: code = NotFound desc = could not find container \"7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f\": container with ID starting with 7fb05c917303b230e12fae08b9389c48a321450d9283b2e89d833155c3e5091f not found: ID does not exist"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.432042 4704 scope.go:117] "RemoveContainer" containerID="06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.432351 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794"} err="failed to get container status \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": rpc error: code = NotFound desc = could not find container \"06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794\": container with ID starting with 06498a473fc6368bcd967d3290359e443e0cb04c7b64e43dfe59da8c66383794 not found: ID does not exist"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.457773 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.457883 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.458078 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-config-data\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.458130 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.458200 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.458225 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-scripts\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.458419 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khb9\" (UniqueName: \"kubernetes.io/projected/90f5862c-6f81-4cef-8d55-0404cd660ad3-kube-api-access-9khb9\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.458471 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559363 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-config-data\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559420 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559469 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559494 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-scripts\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559524 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9khb9\" (UniqueName: \"kubernetes.io/projected/90f5862c-6f81-4cef-8d55-0404cd660ad3-kube-api-access-9khb9\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559545 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559564 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.559586 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.560057 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.564809 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-config-data\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.565273 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.571213 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.584447 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9khb9\" (UniqueName: \"kubernetes.io/projected/90f5862c-6f81-4cef-8d55-0404cd660ad3-kube-api-access-9khb9\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.599467 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.600702 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-scripts\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.601591 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.638599 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.643566 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a91bb9-bd13-47c9-954b-b68c6482ea78" path="/var/lib/kubelet/pods/63a91bb9-bd13-47c9-954b-b68c6482ea78/volumes"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.657554 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.741611 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zzt66"
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.763146 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gzhk\" (UniqueName: \"kubernetes.io/projected/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-kube-api-access-4gzhk\") pod \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") "
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.763199 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-operator-scripts\") pod \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\" (UID: \"bd66d1c3-d3c5-43ce-b451-5d57d24df04b\") "
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.764776 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd66d1c3-d3c5-43ce-b451-5d57d24df04b" (UID: "bd66d1c3-d3c5-43ce-b451-5d57d24df04b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.769081 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-kube-api-access-4gzhk" (OuterVolumeSpecName: "kube-api-access-4gzhk") pod "bd66d1c3-d3c5-43ce-b451-5d57d24df04b" (UID: "bd66d1c3-d3c5-43ce-b451-5d57d24df04b"). InnerVolumeSpecName "kube-api-access-4gzhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.864308 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ck72\" (UniqueName: \"kubernetes.io/projected/53f84d47-64ad-4221-99ef-6a439e6bd75b-kube-api-access-7ck72\") pod \"53f84d47-64ad-4221-99ef-6a439e6bd75b\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") "
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.864390 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f84d47-64ad-4221-99ef-6a439e6bd75b-operator-scripts\") pod \"53f84d47-64ad-4221-99ef-6a439e6bd75b\" (UID: \"53f84d47-64ad-4221-99ef-6a439e6bd75b\") "
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.864988 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gzhk\" (UniqueName: \"kubernetes.io/projected/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-kube-api-access-4gzhk\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.865012 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd66d1c3-d3c5-43ce-b451-5d57d24df04b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.865496 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f84d47-64ad-4221-99ef-6a439e6bd75b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53f84d47-64ad-4221-99ef-6a439e6bd75b" (UID: "53f84d47-64ad-4221-99ef-6a439e6bd75b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.868187 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f84d47-64ad-4221-99ef-6a439e6bd75b-kube-api-access-7ck72" (OuterVolumeSpecName: "kube-api-access-7ck72") pod "53f84d47-64ad-4221-99ef-6a439e6bd75b" (UID: "53f84d47-64ad-4221-99ef-6a439e6bd75b"). InnerVolumeSpecName "kube-api-access-7ck72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.966760 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ck72\" (UniqueName: \"kubernetes.io/projected/53f84d47-64ad-4221-99ef-6a439e6bd75b-kube-api-access-7ck72\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:13 crc kubenswrapper[4704]: I0122 17:00:13.966898 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f84d47-64ad-4221-99ef-6a439e6bd75b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.092525 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 17:00:14 crc kubenswrapper[4704]: W0122 17:00:14.094429 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f5862c_6f81_4cef_8d55_0404cd660ad3.slice/crio-39f3088fc4e91e5cfad1b3602a1c06703f65757320e94d2d947e118b4c82d74a WatchSource:0}: Error finding container 39f3088fc4e91e5cfad1b3602a1c06703f65757320e94d2d947e118b4c82d74a: Status 404 returned error can't find the container with id 39f3088fc4e91e5cfad1b3602a1c06703f65757320e94d2d947e118b4c82d74a
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.267735 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerStarted","Data":"39f3088fc4e91e5cfad1b3602a1c06703f65757320e94d2d947e118b4c82d74a"}
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.269206 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zzt66" event={"ID":"53f84d47-64ad-4221-99ef-6a439e6bd75b","Type":"ContainerDied","Data":"2d58a3d19f6e2a8a5f5e661e8c533c471279b4b3c74f5a3fefae5ea6a593bde3"}
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.269237 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d58a3d19f6e2a8a5f5e661e8c533c471279b4b3c74f5a3fefae5ea6a593bde3"
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.269249 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zzt66"
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.270840 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc"
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.270852 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-04da-account-create-update-f22kc" event={"ID":"bd66d1c3-d3c5-43ce-b451-5d57d24df04b","Type":"ContainerDied","Data":"f7d89fd973c8032f038d9937c4d136f31cf3c4f52c36f8e75e8da39b778b9ff8"}
Jan 22 17:00:14 crc kubenswrapper[4704]: I0122 17:00:14.270900 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7d89fd973c8032f038d9937c4d136f31cf3c4f52c36f8e75e8da39b778b9ff8"
Jan 22 17:00:15 crc kubenswrapper[4704]: I0122 17:00:15.280485 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerStarted","Data":"e2c85bafcadb8c56d9bb41caad0cd3550f7aadcd73bb02cefb0ce4349d4894bb"}
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.136479 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"]
Jan 22 17:00:16 crc kubenswrapper[4704]: E0122 17:00:16.137088 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd66d1c3-d3c5-43ce-b451-5d57d24df04b" containerName="mariadb-account-create-update"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.137105 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd66d1c3-d3c5-43ce-b451-5d57d24df04b" containerName="mariadb-account-create-update"
Jan 22 17:00:16 crc kubenswrapper[4704]: E0122 17:00:16.137119 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f84d47-64ad-4221-99ef-6a439e6bd75b" containerName="mariadb-database-create"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.137127 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f84d47-64ad-4221-99ef-6a439e6bd75b" containerName="mariadb-database-create"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.137283 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f84d47-64ad-4221-99ef-6a439e6bd75b" containerName="mariadb-database-create"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.137333 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd66d1c3-d3c5-43ce-b451-5d57d24df04b" containerName="mariadb-account-create-update"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.137832 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.140629 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-fkbrv"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.141033 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.154949 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"]
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.199668 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-config-data\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.199739 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.199888 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-db-sync-config-data\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.200033 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttf6\" (UniqueName: \"kubernetes.io/projected/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-kube-api-access-dttf6\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.289460 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerStarted","Data":"6d05c4ec8286a450d6103ebbd8942fdbd17ccf45bc9fc6ee465c6f23634ed837"}
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.289506 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerStarted","Data":"fe777c43970b5bb471be8a93a8dcc225fc799039e0d4a7e123eddc9d4b24b588"}
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.300920 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-db-sync-config-data\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.301053 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dttf6\" (UniqueName: \"kubernetes.io/projected/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-kube-api-access-dttf6\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.301083 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-config-data\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.301155 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.305347 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.306493 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-db-sync-config-data\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.306846 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-config-data\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.317858 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dttf6\" (UniqueName: \"kubernetes.io/projected/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-kube-api-access-dttf6\") pod \"watcher-kuttl-db-sync-qm2rg\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.452812 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:16 crc kubenswrapper[4704]: I0122 17:00:16.929328 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"]
Jan 22 17:00:17 crc kubenswrapper[4704]: I0122 17:00:17.298035 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg" event={"ID":"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e","Type":"ContainerStarted","Data":"87dfac8c6c171566ca87abb5fba83bceaad80469729a082951d9a889cd7b5a86"}
Jan 22 17:00:17 crc kubenswrapper[4704]: I0122 17:00:17.298348 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg" event={"ID":"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e","Type":"ContainerStarted","Data":"70f85a8bf699af9888e2957f61257136183b651282df3dbc43f9c86304e21647"}
Jan 22 17:00:17 crc kubenswrapper[4704]: I0122 17:00:17.316455 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg" podStartSLOduration=1.316439388 podStartE2EDuration="1.316439388s" podCreationTimestamp="2026-01-22 17:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:00:17.313099894 +0000 UTC m=+1909.957646594" watchObservedRunningTime="2026-01-22 17:00:17.316439388 +0000 UTC m=+1909.960986088"
Jan 22 17:00:18 crc kubenswrapper[4704]: I0122 17:00:18.308060 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerStarted","Data":"7a31edc6e53a9a8e813ba577795d68060ff3e348889490e67a80642e1e75c6c2"}
Jan 22 17:00:18 crc kubenswrapper[4704]: I0122 17:00:18.308290 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:00:18 crc kubenswrapper[4704]: I0122 17:00:18.339940 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.8401445 podStartE2EDuration="5.339920731s" podCreationTimestamp="2026-01-22 17:00:13 +0000 UTC" firstStartedPulling="2026-01-22 17:00:14.096787805 +0000 UTC m=+1906.741334515" lastFinishedPulling="2026-01-22 17:00:17.596564046 +0000 UTC m=+1910.241110746" observedRunningTime="2026-01-22 17:00:18.329897677 +0000 UTC m=+1910.974444377" watchObservedRunningTime="2026-01-22 17:00:18.339920731 +0000 UTC m=+1910.984467441"
Jan 22 17:00:20 crc kubenswrapper[4704]: I0122 17:00:20.328536 4704 generic.go:334] "Generic (PLEG): container finished" podID="26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" containerID="87dfac8c6c171566ca87abb5fba83bceaad80469729a082951d9a889cd7b5a86" exitCode=0
Jan 22 17:00:20 crc kubenswrapper[4704]: I0122 17:00:20.328720 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg" event={"ID":"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e","Type":"ContainerDied","Data":"87dfac8c6c171566ca87abb5fba83bceaad80469729a082951d9a889cd7b5a86"}
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.653137 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.800050 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-combined-ca-bundle\") pod \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") "
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.800141 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dttf6\" (UniqueName: \"kubernetes.io/projected/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-kube-api-access-dttf6\") pod \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") "
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.800164 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-config-data\") pod \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") "
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.800289 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-db-sync-config-data\") pod \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\" (UID: \"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e\") "
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.807357 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-kube-api-access-dttf6" (OuterVolumeSpecName: "kube-api-access-dttf6") pod "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" (UID: "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e"). InnerVolumeSpecName "kube-api-access-dttf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.808267 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" (UID: "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.834930 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" (UID: "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.855969 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-config-data" (OuterVolumeSpecName: "config-data") pod "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" (UID: "26bb0aee-8347-4b52-b19d-ef0cd4d1a29e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.902456 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.902490 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.902500 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dttf6\" (UniqueName: \"kubernetes.io/projected/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-kube-api-access-dttf6\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:21 crc kubenswrapper[4704]: I0122 17:00:21.902511 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.350825 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg" event={"ID":"26bb0aee-8347-4b52-b19d-ef0cd4d1a29e","Type":"ContainerDied","Data":"70f85a8bf699af9888e2957f61257136183b651282df3dbc43f9c86304e21647"}
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.350875 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70f85a8bf699af9888e2957f61257136183b651282df3dbc43f9c86304e21647"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.350953 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.606238 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 17:00:22 crc kubenswrapper[4704]: E0122 17:00:22.606545 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" containerName="watcher-kuttl-db-sync"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.606558 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" containerName="watcher-kuttl-db-sync"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.606705 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" containerName="watcher-kuttl-db-sync"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.607520 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.612054 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-fkbrv"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.620991 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.621052 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.621085 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.621146 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.621196 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfxj\" (UniqueName: \"kubernetes.io/projected/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-kube-api-access-8wfxj\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.621414 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.624460 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.626054 4704 reflector.go:368] Caches populated for *v1.Secret from
object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.658513 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.659445 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.664737 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.668609 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.722824 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.722899 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.722925 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc 
kubenswrapper[4704]: I0122 17:00:22.722946 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.722978 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.723002 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.723018 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.723039 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5t8h\" (UniqueName: \"kubernetes.io/projected/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-kube-api-access-q5t8h\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 
17:00:22.723054 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.723079 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.723099 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfxj\" (UniqueName: \"kubernetes.io/projected/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-kube-api-access-8wfxj\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.724251 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.730048 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.731477 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.741386 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.742426 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.742982 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.744256 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.746250 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfxj\" (UniqueName: \"kubernetes.io/projected/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-kube-api-access-8wfxj\") pod \"watcher-kuttl-api-0\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.746432 4704 reflector.go:368] Caches populated for *v1.Secret from 
object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.755095 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824121 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfnr\" (UniqueName: \"kubernetes.io/projected/653f8f63-8758-4a25-a51b-20169bfbce50-kube-api-access-lnfnr\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824191 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824217 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824247 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824270 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824291 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5t8h\" (UniqueName: \"kubernetes.io/projected/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-kube-api-access-q5t8h\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824312 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824340 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824363 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653f8f63-8758-4a25-a51b-20169bfbce50-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824383 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824408 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.824661 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.827599 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.827682 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.828002 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.845447 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5t8h\" (UniqueName: \"kubernetes.io/projected/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-kube-api-access-q5t8h\") pod \"watcher-kuttl-applier-0\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.925603 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfnr\" (UniqueName: \"kubernetes.io/projected/653f8f63-8758-4a25-a51b-20169bfbce50-kube-api-access-lnfnr\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.925707 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.925732 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc 
kubenswrapper[4704]: I0122 17:00:22.925774 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653f8f63-8758-4a25-a51b-20169bfbce50-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.925858 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.925885 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.926393 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653f8f63-8758-4a25-a51b-20169bfbce50-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.929073 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc 
kubenswrapper[4704]: I0122 17:00:22.929210 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.932480 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.943417 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.943762 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.948677 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfnr\" (UniqueName: \"kubernetes.io/projected/653f8f63-8758-4a25-a51b-20169bfbce50-kube-api-access-lnfnr\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:22 crc kubenswrapper[4704]: I0122 17:00:22.975839 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:23 crc kubenswrapper[4704]: I0122 17:00:23.110361 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:23 crc kubenswrapper[4704]: I0122 17:00:23.405649 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:00:23 crc kubenswrapper[4704]: I0122 17:00:23.522158 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:00:23 crc kubenswrapper[4704]: W0122 17:00:23.631488 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod653f8f63_8758_4a25_a51b_20169bfbce50.slice/crio-ce99b7cd74f952b574724a16e15857bb2f04ab1948d63308a23624ef65760f82 WatchSource:0}: Error finding container ce99b7cd74f952b574724a16e15857bb2f04ab1948d63308a23624ef65760f82: Status 404 returned error can't find the container with id ce99b7cd74f952b574724a16e15857bb2f04ab1948d63308a23624ef65760f82 Jan 22 17:00:23 crc kubenswrapper[4704]: I0122 17:00:23.659909 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.366102 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"653f8f63-8758-4a25-a51b-20169bfbce50","Type":"ContainerStarted","Data":"639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6"} Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.366435 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"653f8f63-8758-4a25-a51b-20169bfbce50","Type":"ContainerStarted","Data":"ce99b7cd74f952b574724a16e15857bb2f04ab1948d63308a23624ef65760f82"} Jan 
22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.369370 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95","Type":"ContainerStarted","Data":"cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076"} Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.369417 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95","Type":"ContainerStarted","Data":"8b90336c7b282fc4263faeca4b68b444442abebde58434f8ec784055897cb0b8"} Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.377435 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6b594a1-4164-40a3-8814-0fb00c3fb8b2","Type":"ContainerStarted","Data":"6fd9f84337d32aaec0b2259446873daf6a2a6b9ad3e832040170b6b25c3a23dd"} Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.377483 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6b594a1-4164-40a3-8814-0fb00c3fb8b2","Type":"ContainerStarted","Data":"710d67066b59525bf4a66854465e07cdc014f82c78a4ebe4b6a984b070cc168f"} Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.377499 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6b594a1-4164-40a3-8814-0fb00c3fb8b2","Type":"ContainerStarted","Data":"5d324ee52d71cbb139afccf9dbfffe2d888828ebd2894b11617e64e7abc50912"} Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.377693 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.392998 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
podStartSLOduration=2.392976273 podStartE2EDuration="2.392976273s" podCreationTimestamp="2026-01-22 17:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:00:24.387114437 +0000 UTC m=+1917.031661137" watchObservedRunningTime="2026-01-22 17:00:24.392976273 +0000 UTC m=+1917.037522973" Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.414707 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.414685558 podStartE2EDuration="2.414685558s" podCreationTimestamp="2026-01-22 17:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:00:24.409595364 +0000 UTC m=+1917.054142064" watchObservedRunningTime="2026-01-22 17:00:24.414685558 +0000 UTC m=+1917.059232258" Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.434746 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.434726675 podStartE2EDuration="2.434726675s" podCreationTimestamp="2026-01-22 17:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:00:24.429824786 +0000 UTC m=+1917.074371486" watchObservedRunningTime="2026-01-22 17:00:24.434726675 +0000 UTC m=+1917.079273375" Jan 22 17:00:24 crc kubenswrapper[4704]: I0122 17:00:24.448892 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:25 crc kubenswrapper[4704]: I0122 17:00:25.641013 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:26 crc kubenswrapper[4704]: I0122 17:00:26.841703 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:26 crc kubenswrapper[4704]: I0122 17:00:26.853115 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:27 crc kubenswrapper[4704]: I0122 17:00:27.945021 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:27 crc kubenswrapper[4704]: I0122 17:00:27.976509 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:28 crc kubenswrapper[4704]: I0122 17:00:28.120842 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:29 crc kubenswrapper[4704]: I0122 17:00:29.364144 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:30 crc kubenswrapper[4704]: I0122 17:00:30.549819 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:31 crc kubenswrapper[4704]: I0122 17:00:31.746588 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:32 crc kubenswrapper[4704]: I0122 17:00:32.942691 
4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:32 crc kubenswrapper[4704]: I0122 17:00:32.944630 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:32 crc kubenswrapper[4704]: I0122 17:00:32.949707 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:32 crc kubenswrapper[4704]: I0122 17:00:32.976801 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:33 crc kubenswrapper[4704]: I0122 17:00:33.009837 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:33 crc kubenswrapper[4704]: I0122 17:00:33.111787 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:33 crc kubenswrapper[4704]: I0122 17:00:33.133779 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:33 crc kubenswrapper[4704]: I0122 17:00:33.452854 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:33 crc kubenswrapper[4704]: I0122 17:00:33.459621 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:00:33 crc kubenswrapper[4704]: I0122 17:00:33.475549 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:00:33 crc kubenswrapper[4704]: I0122 17:00:33.506285 4704 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.122850 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.383200 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.742813 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-create-l9d46"] Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.743835 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.763353 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-l9d46"] Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.850333 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-0423-account-create-update-dwp5g"] Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.853766 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.856575 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-db-secret" Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.859712 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-0423-account-create-update-dwp5g"] Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.913803 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d010716c-c2ec-4f59-9c18-19b48ec26d8f-operator-scripts\") pod \"cinder-db-create-l9d46\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:34 crc kubenswrapper[4704]: I0122 17:00:34.914095 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbjt\" (UniqueName: \"kubernetes.io/projected/d010716c-c2ec-4f59-9c18-19b48ec26d8f-kube-api-access-4nbjt\") pod \"cinder-db-create-l9d46\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.015511 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr5cl\" (UniqueName: \"kubernetes.io/projected/72a4c89f-490a-477e-a824-d415cd7e8d3b-kube-api-access-kr5cl\") pod \"cinder-0423-account-create-update-dwp5g\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.015743 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/72a4c89f-490a-477e-a824-d415cd7e8d3b-operator-scripts\") pod \"cinder-0423-account-create-update-dwp5g\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.015977 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d010716c-c2ec-4f59-9c18-19b48ec26d8f-operator-scripts\") pod \"cinder-db-create-l9d46\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.016112 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbjt\" (UniqueName: \"kubernetes.io/projected/d010716c-c2ec-4f59-9c18-19b48ec26d8f-kube-api-access-4nbjt\") pod \"cinder-db-create-l9d46\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.016910 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d010716c-c2ec-4f59-9c18-19b48ec26d8f-operator-scripts\") pod \"cinder-db-create-l9d46\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.045731 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbjt\" (UniqueName: \"kubernetes.io/projected/d010716c-c2ec-4f59-9c18-19b48ec26d8f-kube-api-access-4nbjt\") pod \"cinder-db-create-l9d46\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.068304 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.117248 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72a4c89f-490a-477e-a824-d415cd7e8d3b-operator-scripts\") pod \"cinder-0423-account-create-update-dwp5g\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.117408 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr5cl\" (UniqueName: \"kubernetes.io/projected/72a4c89f-490a-477e-a824-d415cd7e8d3b-kube-api-access-kr5cl\") pod \"cinder-0423-account-create-update-dwp5g\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.117989 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72a4c89f-490a-477e-a824-d415cd7e8d3b-operator-scripts\") pod \"cinder-0423-account-create-update-dwp5g\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.177222 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr5cl\" (UniqueName: \"kubernetes.io/projected/72a4c89f-490a-477e-a824-d415cd7e8d3b-kube-api-access-kr5cl\") pod \"cinder-0423-account-create-update-dwp5g\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.473398 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.572455 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.677533 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-l9d46"] Jan 22 17:00:35 crc kubenswrapper[4704]: W0122 17:00:35.692663 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd010716c_c2ec_4f59_9c18_19b48ec26d8f.slice/crio-acdfa292796a015501ba1a785bb5c3aeeeb6f2eae8a94ad493487740cbfcb6ea WatchSource:0}: Error finding container acdfa292796a015501ba1a785bb5c3aeeeb6f2eae8a94ad493487740cbfcb6ea: Status 404 returned error can't find the container with id acdfa292796a015501ba1a785bb5c3aeeeb6f2eae8a94ad493487740cbfcb6ea Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.911909 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.912238 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-central-agent" containerID="cri-o://e2c85bafcadb8c56d9bb41caad0cd3550f7aadcd73bb02cefb0ce4349d4894bb" gracePeriod=30 Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.912393 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="proxy-httpd" containerID="cri-o://7a31edc6e53a9a8e813ba577795d68060ff3e348889490e67a80642e1e75c6c2" gracePeriod=30 Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.912449 
4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="sg-core" containerID="cri-o://6d05c4ec8286a450d6103ebbd8942fdbd17ccf45bc9fc6ee465c6f23634ed837" gracePeriod=30 Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.912502 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-notification-agent" containerID="cri-o://fe777c43970b5bb471be8a93a8dcc225fc799039e0d4a7e123eddc9d4b24b588" gracePeriod=30 Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.935712 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 22 17:00:35 crc kubenswrapper[4704]: I0122 17:00:35.987592 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-0423-account-create-update-dwp5g"] Jan 22 17:00:35 crc kubenswrapper[4704]: W0122 17:00:35.988451 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a4c89f_490a_477e_a824_d415cd7e8d3b.slice/crio-e9933a4453ce0ae83d679accf733ecb2feea74c6fd894dc3e6921f2f5e3102f2 WatchSource:0}: Error finding container e9933a4453ce0ae83d679accf733ecb2feea74c6fd894dc3e6921f2f5e3102f2: Status 404 returned error can't find the container with id e9933a4453ce0ae83d679accf733ecb2feea74c6fd894dc3e6921f2f5e3102f2 Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.475521 4704 generic.go:334] "Generic (PLEG): container finished" podID="d010716c-c2ec-4f59-9c18-19b48ec26d8f" containerID="b36d471f3f5a62e16ab896f557903f89884ab6b1b0d09f008d48332194baf72d" exitCode=0 Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 
17:00:36.475581 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-l9d46" event={"ID":"d010716c-c2ec-4f59-9c18-19b48ec26d8f","Type":"ContainerDied","Data":"b36d471f3f5a62e16ab896f557903f89884ab6b1b0d09f008d48332194baf72d"} Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.475880 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-l9d46" event={"ID":"d010716c-c2ec-4f59-9c18-19b48ec26d8f","Type":"ContainerStarted","Data":"acdfa292796a015501ba1a785bb5c3aeeeb6f2eae8a94ad493487740cbfcb6ea"} Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.477726 4704 generic.go:334] "Generic (PLEG): container finished" podID="72a4c89f-490a-477e-a824-d415cd7e8d3b" containerID="784059c66a6c881f4f8187b6cdb1d8b3c9e40f01195ed03b40033f1abc354dcb" exitCode=0 Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.477878 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" event={"ID":"72a4c89f-490a-477e-a824-d415cd7e8d3b","Type":"ContainerDied","Data":"784059c66a6c881f4f8187b6cdb1d8b3c9e40f01195ed03b40033f1abc354dcb"} Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.477935 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" event={"ID":"72a4c89f-490a-477e-a824-d415cd7e8d3b","Type":"ContainerStarted","Data":"e9933a4453ce0ae83d679accf733ecb2feea74c6fd894dc3e6921f2f5e3102f2"} Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.484517 4704 generic.go:334] "Generic (PLEG): container finished" podID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerID="7a31edc6e53a9a8e813ba577795d68060ff3e348889490e67a80642e1e75c6c2" exitCode=0 Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.484538 4704 generic.go:334] "Generic (PLEG): container finished" podID="90f5862c-6f81-4cef-8d55-0404cd660ad3" 
containerID="6d05c4ec8286a450d6103ebbd8942fdbd17ccf45bc9fc6ee465c6f23634ed837" exitCode=2 Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.484546 4704 generic.go:334] "Generic (PLEG): container finished" podID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerID="e2c85bafcadb8c56d9bb41caad0cd3550f7aadcd73bb02cefb0ce4349d4894bb" exitCode=0 Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.484565 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerDied","Data":"7a31edc6e53a9a8e813ba577795d68060ff3e348889490e67a80642e1e75c6c2"} Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.484585 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerDied","Data":"6d05c4ec8286a450d6103ebbd8942fdbd17ccf45bc9fc6ee465c6f23634ed837"} Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.484595 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerDied","Data":"e2c85bafcadb8c56d9bb41caad0cd3550f7aadcd73bb02cefb0ce4349d4894bb"} Jan 22 17:00:36 crc kubenswrapper[4704]: I0122 17:00:36.774824 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.506651 4704 generic.go:334] "Generic (PLEG): container finished" podID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerID="fe777c43970b5bb471be8a93a8dcc225fc799039e0d4a7e123eddc9d4b24b588" exitCode=0 Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.506745 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerDied","Data":"fe777c43970b5bb471be8a93a8dcc225fc799039e0d4a7e123eddc9d4b24b588"} Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.750543 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.779844 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-sg-core-conf-yaml\") pod \"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.779901 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-run-httpd\") pod \"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.779923 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-scripts\") pod \"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.779939 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-config-data\") pod \"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.780053 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-log-httpd\") pod 
\"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.780106 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-combined-ca-bundle\") pod \"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.780137 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9khb9\" (UniqueName: \"kubernetes.io/projected/90f5862c-6f81-4cef-8d55-0404cd660ad3-kube-api-access-9khb9\") pod \"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.780169 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-ceilometer-tls-certs\") pod \"90f5862c-6f81-4cef-8d55-0404cd660ad3\" (UID: \"90f5862c-6f81-4cef-8d55-0404cd660ad3\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.785581 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: "90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.788395 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: "90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.809023 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-scripts" (OuterVolumeSpecName: "scripts") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: "90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.809367 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f5862c-6f81-4cef-8d55-0404cd660ad3-kube-api-access-9khb9" (OuterVolumeSpecName: "kube-api-access-9khb9") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: "90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "kube-api-access-9khb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.829401 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: "90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.837276 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: "90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.881568 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9khb9\" (UniqueName: \"kubernetes.io/projected/90f5862c-6f81-4cef-8d55-0404cd660ad3-kube-api-access-9khb9\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.881626 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.881638 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.881648 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.881657 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.881665 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5862c-6f81-4cef-8d55-0404cd660ad3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.906045 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: 
"90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.926115 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.966706 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-config-data" (OuterVolumeSpecName: "config-data") pod "90f5862c-6f81-4cef-8d55-0404cd660ad3" (UID: "90f5862c-6f81-4cef-8d55-0404cd660ad3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.973066 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.978576 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.988274 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d010716c-c2ec-4f59-9c18-19b48ec26d8f-operator-scripts\") pod \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.988350 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72a4c89f-490a-477e-a824-d415cd7e8d3b-operator-scripts\") pod \"72a4c89f-490a-477e-a824-d415cd7e8d3b\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.988436 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr5cl\" (UniqueName: \"kubernetes.io/projected/72a4c89f-490a-477e-a824-d415cd7e8d3b-kube-api-access-kr5cl\") pod \"72a4c89f-490a-477e-a824-d415cd7e8d3b\" (UID: \"72a4c89f-490a-477e-a824-d415cd7e8d3b\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.988487 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nbjt\" (UniqueName: \"kubernetes.io/projected/d010716c-c2ec-4f59-9c18-19b48ec26d8f-kube-api-access-4nbjt\") pod \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\" (UID: \"d010716c-c2ec-4f59-9c18-19b48ec26d8f\") " Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.988682 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.988699 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/90f5862c-6f81-4cef-8d55-0404cd660ad3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.989647 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72a4c89f-490a-477e-a824-d415cd7e8d3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72a4c89f-490a-477e-a824-d415cd7e8d3b" (UID: "72a4c89f-490a-477e-a824-d415cd7e8d3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:00:37 crc kubenswrapper[4704]: I0122 17:00:37.990062 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d010716c-c2ec-4f59-9c18-19b48ec26d8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d010716c-c2ec-4f59-9c18-19b48ec26d8f" (UID: "d010716c-c2ec-4f59-9c18-19b48ec26d8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.002751 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a4c89f-490a-477e-a824-d415cd7e8d3b-kube-api-access-kr5cl" (OuterVolumeSpecName: "kube-api-access-kr5cl") pod "72a4c89f-490a-477e-a824-d415cd7e8d3b" (UID: "72a4c89f-490a-477e-a824-d415cd7e8d3b"). InnerVolumeSpecName "kube-api-access-kr5cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.003003 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d010716c-c2ec-4f59-9c18-19b48ec26d8f-kube-api-access-4nbjt" (OuterVolumeSpecName: "kube-api-access-4nbjt") pod "d010716c-c2ec-4f59-9c18-19b48ec26d8f" (UID: "d010716c-c2ec-4f59-9c18-19b48ec26d8f"). InnerVolumeSpecName "kube-api-access-4nbjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.089836 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d010716c-c2ec-4f59-9c18-19b48ec26d8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.090168 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72a4c89f-490a-477e-a824-d415cd7e8d3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.090182 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr5cl\" (UniqueName: \"kubernetes.io/projected/72a4c89f-490a-477e-a824-d415cd7e8d3b-kube-api-access-kr5cl\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.090193 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nbjt\" (UniqueName: \"kubernetes.io/projected/d010716c-c2ec-4f59-9c18-19b48ec26d8f-kube-api-access-4nbjt\") on node \"crc\" DevicePath \"\"" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.375152 4704 scope.go:117] "RemoveContainer" containerID="b8cfc689aadb7eb7dd0b663b4cc8e963f6354138302d758c5c396ca6e30a0497" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.395855 4704 scope.go:117] "RemoveContainer" containerID="264f2b0fab046086e5221dca03bf024561e0ba8a3035b810dc2bc349a3fd331a" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.416919 4704 scope.go:117] "RemoveContainer" containerID="42e6953a7ae21d3be1de4329f293ebcf76f7dbd9401643e140639f63099dd8b9" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.515096 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-l9d46" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.515096 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-l9d46" event={"ID":"d010716c-c2ec-4f59-9c18-19b48ec26d8f","Type":"ContainerDied","Data":"acdfa292796a015501ba1a785bb5c3aeeeb6f2eae8a94ad493487740cbfcb6ea"} Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.515215 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acdfa292796a015501ba1a785bb5c3aeeeb6f2eae8a94ad493487740cbfcb6ea" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.518850 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"90f5862c-6f81-4cef-8d55-0404cd660ad3","Type":"ContainerDied","Data":"39f3088fc4e91e5cfad1b3602a1c06703f65757320e94d2d947e118b4c82d74a"} Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.518908 4704 scope.go:117] "RemoveContainer" containerID="7a31edc6e53a9a8e813ba577795d68060ff3e348889490e67a80642e1e75c6c2" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.519048 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.524460 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" event={"ID":"72a4c89f-490a-477e-a824-d415cd7e8d3b","Type":"ContainerDied","Data":"e9933a4453ce0ae83d679accf733ecb2feea74c6fd894dc3e6921f2f5e3102f2"} Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.524487 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-0423-account-create-update-dwp5g" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.524502 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9933a4453ce0ae83d679accf733ecb2feea74c6fd894dc3e6921f2f5e3102f2" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.553402 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.557649 4704 scope.go:117] "RemoveContainer" containerID="6d05c4ec8286a450d6103ebbd8942fdbd17ccf45bc9fc6ee465c6f23634ed837" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.562517 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.577664 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:38 crc kubenswrapper[4704]: E0122 17:00:38.577983 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d010716c-c2ec-4f59-9c18-19b48ec26d8f" containerName="mariadb-database-create" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.577995 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d010716c-c2ec-4f59-9c18-19b48ec26d8f" containerName="mariadb-database-create" Jan 22 17:00:38 crc kubenswrapper[4704]: E0122 17:00:38.578010 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="sg-core" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578016 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="sg-core" Jan 22 17:00:38 crc kubenswrapper[4704]: E0122 17:00:38.578034 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="proxy-httpd" Jan 22 17:00:38 crc 
kubenswrapper[4704]: I0122 17:00:38.578040 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="proxy-httpd" Jan 22 17:00:38 crc kubenswrapper[4704]: E0122 17:00:38.578049 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-notification-agent" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578055 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-notification-agent" Jan 22 17:00:38 crc kubenswrapper[4704]: E0122 17:00:38.578068 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-central-agent" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578091 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-central-agent" Jan 22 17:00:38 crc kubenswrapper[4704]: E0122 17:00:38.578107 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a4c89f-490a-477e-a824-d415cd7e8d3b" containerName="mariadb-account-create-update" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578113 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a4c89f-490a-477e-a824-d415cd7e8d3b" containerName="mariadb-account-create-update" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578245 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d010716c-c2ec-4f59-9c18-19b48ec26d8f" containerName="mariadb-database-create" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578257 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-central-agent" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578266 4704 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="proxy-httpd" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578277 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="sg-core" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578287 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" containerName="ceilometer-notification-agent" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.578298 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a4c89f-490a-477e-a824-d415cd7e8d3b" containerName="mariadb-account-create-update" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.579723 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.581647 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.582107 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.582309 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.596031 4704 scope.go:117] "RemoveContainer" containerID="fe777c43970b5bb471be8a93a8dcc225fc799039e0d4a7e123eddc9d4b24b588" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.614391 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.636102 4704 scope.go:117] "RemoveContainer" containerID="e2c85bafcadb8c56d9bb41caad0cd3550f7aadcd73bb02cefb0ce4349d4894bb" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 
17:00:38.703101 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.703156 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-scripts\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.703212 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-run-httpd\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.703253 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-config-data\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.703304 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msrb\" (UniqueName: \"kubernetes.io/projected/544df65b-383c-41da-94b8-914c47c3e146-kube-api-access-9msrb\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.703342 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.703385 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.703418 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-log-httpd\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.805071 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.805165 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-log-httpd\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.805255 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.807673 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-scripts\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.807900 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-run-httpd\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.807996 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-config-data\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.808060 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9msrb\" (UniqueName: \"kubernetes.io/projected/544df65b-383c-41da-94b8-914c47c3e146-kube-api-access-9msrb\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.808129 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.806265 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-log-httpd\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.809318 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-run-httpd\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.811143 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.811779 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.812872 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-config-data\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.815077 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.829731 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-scripts\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.833410 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9msrb\" (UniqueName: \"kubernetes.io/projected/544df65b-383c-41da-94b8-914c47c3e146-kube-api-access-9msrb\") pod \"ceilometer-0\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:38 crc kubenswrapper[4704]: I0122 17:00:38.909169 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:39 crc kubenswrapper[4704]: I0122 17:00:39.155659 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:39 crc kubenswrapper[4704]: I0122 17:00:39.350224 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:00:39 crc kubenswrapper[4704]: I0122 17:00:39.535630 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerStarted","Data":"8094b22d3dff39c86edf9731b08392bdf4257f6e1876327256b00a567d696ec0"} Jan 22 17:00:39 crc kubenswrapper[4704]: I0122 17:00:39.643752 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f5862c-6f81-4cef-8d55-0404cd660ad3" path="/var/lib/kubelet/pods/90f5862c-6f81-4cef-8d55-0404cd660ad3/volumes" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.090526 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-sync-42hqh"] Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.092601 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.097647 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-cnbkb" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.098040 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.098320 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.109691 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-42hqh"] Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.229142 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-combined-ca-bundle\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.229522 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-config-data\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.229600 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b97177-c5dd-4e1c-bc12-a24678377554-etc-machine-id\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 
crc kubenswrapper[4704]: I0122 17:00:40.229671 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-scripts\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.229714 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-db-sync-config-data\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.229868 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658tr\" (UniqueName: \"kubernetes.io/projected/30b97177-c5dd-4e1c-bc12-a24678377554-kube-api-access-658tr\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.330837 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-scripts\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.330879 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-db-sync-config-data\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc 
kubenswrapper[4704]: I0122 17:00:40.330941 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658tr\" (UniqueName: \"kubernetes.io/projected/30b97177-c5dd-4e1c-bc12-a24678377554-kube-api-access-658tr\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.330970 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-combined-ca-bundle\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.331016 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-config-data\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.331046 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b97177-c5dd-4e1c-bc12-a24678377554-etc-machine-id\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.331111 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b97177-c5dd-4e1c-bc12-a24678377554-etc-machine-id\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.339396 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-db-sync-config-data\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.339496 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-scripts\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.339694 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-combined-ca-bundle\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.349605 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658tr\" (UniqueName: \"kubernetes.io/projected/30b97177-c5dd-4e1c-bc12-a24678377554-kube-api-access-658tr\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.350117 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-config-data\") pod \"cinder-db-sync-42hqh\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.354780 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.515037 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:00:40 crc kubenswrapper[4704]: I0122 17:00:40.552154 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerStarted","Data":"85ce20a5f0a0c8aa1b6a12a678f33ec9de874c06ac1a8b7c5050afd74a40eea8"} Jan 22 17:00:41 crc kubenswrapper[4704]: I0122 17:00:41.017542 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-42hqh"] Jan 22 17:00:41 crc kubenswrapper[4704]: I0122 17:00:41.544600 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:41 crc kubenswrapper[4704]: I0122 17:00:41.561213 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerStarted","Data":"c94d9849c3ce2e7d0f909583a170a3fcf0a99662febc2b2fb44beb15f503125a"} Jan 22 17:00:41 crc kubenswrapper[4704]: I0122 17:00:41.562628 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-42hqh" event={"ID":"30b97177-c5dd-4e1c-bc12-a24678377554","Type":"ContainerStarted","Data":"29dcaca86793b85969ed118594925d8da3e5c4b4a853d2227a24afd2723c5b2f"} Jan 22 17:00:42 crc kubenswrapper[4704]: I0122 17:00:42.574545 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerStarted","Data":"1575cfdfd8c36defc6b08cdb3a5e7ee4f2bb9f4c6c2241af64706efb3b0f6112"} 
Jan 22 17:00:42 crc kubenswrapper[4704]: I0122 17:00:42.781889 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:43 crc kubenswrapper[4704]: I0122 17:00:43.594749 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerStarted","Data":"bf55c0ce75b66f26f88fb2a825fb47999f6e655610a2300b0b4d1a8ff2e8769f"} Jan 22 17:00:43 crc kubenswrapper[4704]: I0122 17:00:43.601979 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:00:43 crc kubenswrapper[4704]: I0122 17:00:43.635811 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.7028912 podStartE2EDuration="5.635728072s" podCreationTimestamp="2026-01-22 17:00:38 +0000 UTC" firstStartedPulling="2026-01-22 17:00:39.29213804 +0000 UTC m=+1931.936684740" lastFinishedPulling="2026-01-22 17:00:43.224974912 +0000 UTC m=+1935.869521612" observedRunningTime="2026-01-22 17:00:43.625037499 +0000 UTC m=+1936.269584199" watchObservedRunningTime="2026-01-22 17:00:43.635728072 +0000 UTC m=+1936.280274772" Jan 22 17:00:43 crc kubenswrapper[4704]: I0122 17:00:43.974939 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:45 crc kubenswrapper[4704]: I0122 17:00:45.159575 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:46 crc kubenswrapper[4704]: I0122 17:00:46.384306 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:47 crc kubenswrapper[4704]: I0122 17:00:47.563536 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:48 crc kubenswrapper[4704]: I0122 17:00:48.755624 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:49 crc kubenswrapper[4704]: I0122 17:00:49.086125 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:00:49 crc kubenswrapper[4704]: I0122 17:00:49.086178 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:00:49 crc kubenswrapper[4704]: I0122 17:00:49.983755 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:51 crc kubenswrapper[4704]: I0122 17:00:51.220958 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:52 crc kubenswrapper[4704]: I0122 17:00:52.445351 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:53 crc kubenswrapper[4704]: I0122 17:00:53.674766 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:54 crc kubenswrapper[4704]: I0122 17:00:54.903381 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:56 crc kubenswrapper[4704]: I0122 17:00:56.089676 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:56 crc kubenswrapper[4704]: I0122 17:00:56.717906 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-42hqh" event={"ID":"30b97177-c5dd-4e1c-bc12-a24678377554","Type":"ContainerStarted","Data":"613c624a94aa89dce0e5f7c167a07454e81f7f4468cc7b95f6a508b7e633c91a"} Jan 22 17:00:57 crc kubenswrapper[4704]: I0122 17:00:57.300271 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:58 crc kubenswrapper[4704]: I0122 17:00:58.504184 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:00:59 crc kubenswrapper[4704]: I0122 17:00:59.680029 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:00 crc kubenswrapper[4704]: 
I0122 17:01:00.145185 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-sync-42hqh" podStartSLOduration=5.508610794 podStartE2EDuration="20.14515342s" podCreationTimestamp="2026-01-22 17:00:40 +0000 UTC" firstStartedPulling="2026-01-22 17:00:41.026742253 +0000 UTC m=+1933.671288943" lastFinishedPulling="2026-01-22 17:00:55.663284869 +0000 UTC m=+1948.307831569" observedRunningTime="2026-01-22 17:00:56.732079215 +0000 UTC m=+1949.376625915" watchObservedRunningTime="2026-01-22 17:01:00.14515342 +0000 UTC m=+1952.789700120" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.168247 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-cron-29485021-xqqxb"] Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.169279 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.184974 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29485021-xqqxb"] Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.262509 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-config-data\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.263090 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-combined-ca-bundle\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc 
kubenswrapper[4704]: I0122 17:01:00.263226 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-fernet-keys\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.263375 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-cert-memcached-mtls\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.263466 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gq9j\" (UniqueName: \"kubernetes.io/projected/b9789623-d528-4ee3-bb97-c687256c928c-kube-api-access-2gq9j\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.365609 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-config-data\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.365668 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-combined-ca-bundle\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " 
pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.365702 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-fernet-keys\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.365883 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-cert-memcached-mtls\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.365933 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq9j\" (UniqueName: \"kubernetes.io/projected/b9789623-d528-4ee3-bb97-c687256c928c-kube-api-access-2gq9j\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.371743 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-config-data\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.374767 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-cert-memcached-mtls\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " 
pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.376562 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-combined-ca-bundle\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.379425 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-fernet-keys\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.401032 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq9j\" (UniqueName: \"kubernetes.io/projected/b9789623-d528-4ee3-bb97-c687256c928c-kube-api-access-2gq9j\") pod \"keystone-cron-29485021-xqqxb\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.499127 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.755548 4704 generic.go:334] "Generic (PLEG): container finished" podID="30b97177-c5dd-4e1c-bc12-a24678377554" containerID="613c624a94aa89dce0e5f7c167a07454e81f7f4468cc7b95f6a508b7e633c91a" exitCode=0 Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.755651 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-42hqh" event={"ID":"30b97177-c5dd-4e1c-bc12-a24678377554","Type":"ContainerDied","Data":"613c624a94aa89dce0e5f7c167a07454e81f7f4468cc7b95f6a508b7e633c91a"} Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.882940 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:00 crc kubenswrapper[4704]: I0122 17:01:00.981702 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29485021-xqqxb"] Jan 22 17:01:01 crc kubenswrapper[4704]: I0122 17:01:01.767527 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" event={"ID":"b9789623-d528-4ee3-bb97-c687256c928c","Type":"ContainerStarted","Data":"813514ee1879973f903e660b1dc65e853a0bfee1e62d6e142df94fdfc1ad6a9c"} Jan 22 17:01:01 crc kubenswrapper[4704]: I0122 17:01:01.767876 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" event={"ID":"b9789623-d528-4ee3-bb97-c687256c928c","Type":"ContainerStarted","Data":"3c3bb51409b5add9b40b63ef0caa69577a70c59a43298fdd0073c5a8695e96e4"} Jan 22 17:01:01 crc kubenswrapper[4704]: I0122 17:01:01.796834 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" podStartSLOduration=1.79676265 podStartE2EDuration="1.79676265s" 
podCreationTimestamp="2026-01-22 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:01.787725924 +0000 UTC m=+1954.432272644" watchObservedRunningTime="2026-01-22 17:01:01.79676265 +0000 UTC m=+1954.441309390" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.060396 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.245143 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.300785 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-658tr\" (UniqueName: \"kubernetes.io/projected/30b97177-c5dd-4e1c-bc12-a24678377554-kube-api-access-658tr\") pod \"30b97177-c5dd-4e1c-bc12-a24678377554\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.300853 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-combined-ca-bundle\") pod \"30b97177-c5dd-4e1c-bc12-a24678377554\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.300875 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-config-data\") pod \"30b97177-c5dd-4e1c-bc12-a24678377554\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.300929 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-scripts\") pod \"30b97177-c5dd-4e1c-bc12-a24678377554\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.300943 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-db-sync-config-data\") pod \"30b97177-c5dd-4e1c-bc12-a24678377554\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.300994 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b97177-c5dd-4e1c-bc12-a24678377554-etc-machine-id\") pod \"30b97177-c5dd-4e1c-bc12-a24678377554\" (UID: \"30b97177-c5dd-4e1c-bc12-a24678377554\") " Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.301452 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30b97177-c5dd-4e1c-bc12-a24678377554-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30b97177-c5dd-4e1c-bc12-a24678377554" (UID: "30b97177-c5dd-4e1c-bc12-a24678377554"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.302055 4704 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30b97177-c5dd-4e1c-bc12-a24678377554-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.306480 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b97177-c5dd-4e1c-bc12-a24678377554-kube-api-access-658tr" (OuterVolumeSpecName: "kube-api-access-658tr") pod "30b97177-c5dd-4e1c-bc12-a24678377554" (UID: "30b97177-c5dd-4e1c-bc12-a24678377554"). 
InnerVolumeSpecName "kube-api-access-658tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.306738 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "30b97177-c5dd-4e1c-bc12-a24678377554" (UID: "30b97177-c5dd-4e1c-bc12-a24678377554"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.307597 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-scripts" (OuterVolumeSpecName: "scripts") pod "30b97177-c5dd-4e1c-bc12-a24678377554" (UID: "30b97177-c5dd-4e1c-bc12-a24678377554"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.326944 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30b97177-c5dd-4e1c-bc12-a24678377554" (UID: "30b97177-c5dd-4e1c-bc12-a24678377554"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.357034 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-config-data" (OuterVolumeSpecName: "config-data") pod "30b97177-c5dd-4e1c-bc12-a24678377554" (UID: "30b97177-c5dd-4e1c-bc12-a24678377554"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.403948 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.403980 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.403990 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.403999 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/30b97177-c5dd-4e1c-bc12-a24678377554-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.404009 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-658tr\" (UniqueName: \"kubernetes.io/projected/30b97177-c5dd-4e1c-bc12-a24678377554-kube-api-access-658tr\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.778704 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-42hqh" event={"ID":"30b97177-c5dd-4e1c-bc12-a24678377554","Type":"ContainerDied","Data":"29dcaca86793b85969ed118594925d8da3e5c4b4a853d2227a24afd2723c5b2f"} Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.779700 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29dcaca86793b85969ed118594925d8da3e5c4b4a853d2227a24afd2723c5b2f" Jan 22 17:01:02 crc kubenswrapper[4704]: I0122 17:01:02.778741 4704 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-42hqh" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.098683 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:03 crc kubenswrapper[4704]: E0122 17:01:03.099456 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b97177-c5dd-4e1c-bc12-a24678377554" containerName="cinder-db-sync" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.099475 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b97177-c5dd-4e1c-bc12-a24678377554" containerName="cinder-db-sync" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.099649 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b97177-c5dd-4e1c-bc12-a24678377554" containerName="cinder-db-sync" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.100742 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.105087 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.108184 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-cnbkb" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.108831 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.108922 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.114667 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 
17:01:03.116009 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.124317 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.131034 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.138680 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218528 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218575 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-dev\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218609 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218644 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218674 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218695 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218712 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-lib-modules\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218738 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-scripts\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218870 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-sys\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218906 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218927 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.218981 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219004 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkgt2\" (UniqueName: \"kubernetes.io/projected/2783df74-2490-4583-b996-0b3795cc503b-kube-api-access-fkgt2\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219029 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219119 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219210 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219261 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219284 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219305 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b2b0b39b-1e4c-4668-a833-1d54167690d7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219331 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219386 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcd4l\" (UniqueName: \"kubernetes.io/projected/b2b0b39b-1e4c-4668-a833-1d54167690d7-kube-api-access-qcd4l\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219422 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.219492 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-run\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.248033 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.263254 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.264850 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.271119 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.276290 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321001 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-scripts\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321051 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321068 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-sys\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321084 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321099 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321119 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321135 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkgt2\" (UniqueName: \"kubernetes.io/projected/2783df74-2490-4583-b996-0b3795cc503b-kube-api-access-fkgt2\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321153 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321174 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321195 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321237 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321274 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321289 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-scripts\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321312 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321331 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b0b39b-1e4c-4668-a833-1d54167690d7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321350 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321366 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a406690-1947-43ac-92da-34c51d2076a6-logs\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321396 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcd4l\" (UniqueName: \"kubernetes.io/projected/b2b0b39b-1e4c-4668-a833-1d54167690d7-kube-api-access-qcd4l\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321411 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5pmd\" (UniqueName: \"kubernetes.io/projected/6a406690-1947-43ac-92da-34c51d2076a6-kube-api-access-f5pmd\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " 
pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321430 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a406690-1947-43ac-92da-34c51d2076a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321449 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321481 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-run\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321499 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321519 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-dev\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321536 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321552 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321568 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321595 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321615 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321632 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321647 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-lib-modules\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.321728 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-lib-modules\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.326917 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-scripts\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.327068 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.327141 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-dev\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc 
kubenswrapper[4704]: I0122 17:01:03.327727 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.327774 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.328102 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.328155 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b0b39b-1e4c-4668-a833-1d54167690d7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.328307 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.328395 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.328472 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.328503 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-run\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.328523 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-sys\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.335245 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.335679 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 
17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.335741 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.336370 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.336891 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.339122 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.343375 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.347076 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qcd4l\" (UniqueName: \"kubernetes.io/projected/b2b0b39b-1e4c-4668-a833-1d54167690d7-kube-api-access-qcd4l\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.349372 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkgt2\" (UniqueName: \"kubernetes.io/projected/2783df74-2490-4583-b996-0b3795cc503b-kube-api-access-fkgt2\") pod \"cinder-backup-0\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.353510 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423155 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423195 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423240 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423269 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423305 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-scripts\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423325 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a406690-1947-43ac-92da-34c51d2076a6-logs\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423358 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5pmd\" (UniqueName: \"kubernetes.io/projected/6a406690-1947-43ac-92da-34c51d2076a6-kube-api-access-f5pmd\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423377 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a406690-1947-43ac-92da-34c51d2076a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " 
pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.423464 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a406690-1947-43ac-92da-34c51d2076a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.424219 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a406690-1947-43ac-92da-34c51d2076a6-logs\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.424474 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.434056 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.436521 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.436669 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " 
pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.436857 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.447420 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-scripts\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.447917 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.459087 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5pmd\" (UniqueName: \"kubernetes.io/projected/6a406690-1947-43ac-92da-34c51d2076a6-kube-api-access-f5pmd\") pod \"cinder-api-0\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.592658 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.801616 4704 generic.go:334] "Generic (PLEG): container finished" podID="b9789623-d528-4ee3-bb97-c687256c928c" containerID="813514ee1879973f903e660b1dc65e853a0bfee1e62d6e142df94fdfc1ad6a9c" exitCode=0 Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.801682 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" event={"ID":"b9789623-d528-4ee3-bb97-c687256c928c","Type":"ContainerDied","Data":"813514ee1879973f903e660b1dc65e853a0bfee1e62d6e142df94fdfc1ad6a9c"} Jan 22 17:01:03 crc kubenswrapper[4704]: I0122 17:01:03.954226 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:04 crc kubenswrapper[4704]: I0122 17:01:04.105620 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:04 crc kubenswrapper[4704]: W0122 17:01:04.111997 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a406690_1947_43ac_92da_34c51d2076a6.slice/crio-201e385fb931a72f874ff6e42e3363d364525cdff7af25c29b4a0660f5c94299 WatchSource:0}: Error finding container 201e385fb931a72f874ff6e42e3363d364525cdff7af25c29b4a0660f5c94299: Status 404 returned error can't find the container with id 201e385fb931a72f874ff6e42e3363d364525cdff7af25c29b4a0660f5c94299 Jan 22 17:01:04 crc kubenswrapper[4704]: I0122 17:01:04.126571 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:04 crc kubenswrapper[4704]: W0122 17:01:04.147944 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2783df74_2490_4583_b996_0b3795cc503b.slice/crio-6ff888cc9cafab1e6f2072f8073542021190146669eb96cbe6b8c5c4e003e98e 
WatchSource:0}: Error finding container 6ff888cc9cafab1e6f2072f8073542021190146669eb96cbe6b8c5c4e003e98e: Status 404 returned error can't find the container with id 6ff888cc9cafab1e6f2072f8073542021190146669eb96cbe6b8c5c4e003e98e Jan 22 17:01:04 crc kubenswrapper[4704]: I0122 17:01:04.423319 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:04 crc kubenswrapper[4704]: I0122 17:01:04.810641 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"b2b0b39b-1e4c-4668-a833-1d54167690d7","Type":"ContainerStarted","Data":"0b91cc86380d6205c296255b9800ba5b95be94021e84e28ed57ca16934706f5f"} Jan 22 17:01:04 crc kubenswrapper[4704]: I0122 17:01:04.812128 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"6a406690-1947-43ac-92da-34c51d2076a6","Type":"ContainerStarted","Data":"9e7f0d8d10a724c3db98b89e9c67e200a48dbce3472d07b37b8e7817f476526c"} Jan 22 17:01:04 crc kubenswrapper[4704]: I0122 17:01:04.812221 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"6a406690-1947-43ac-92da-34c51d2076a6","Type":"ContainerStarted","Data":"201e385fb931a72f874ff6e42e3363d364525cdff7af25c29b4a0660f5c94299"} Jan 22 17:01:04 crc kubenswrapper[4704]: I0122 17:01:04.813620 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"2783df74-2490-4583-b996-0b3795cc503b","Type":"ContainerStarted","Data":"6ff888cc9cafab1e6f2072f8073542021190146669eb96cbe6b8c5c4e003e98e"} Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.196189 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.279672 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-cert-memcached-mtls\") pod \"b9789623-d528-4ee3-bb97-c687256c928c\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.279730 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-fernet-keys\") pod \"b9789623-d528-4ee3-bb97-c687256c928c\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.279755 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-config-data\") pod \"b9789623-d528-4ee3-bb97-c687256c928c\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.280302 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gq9j\" (UniqueName: \"kubernetes.io/projected/b9789623-d528-4ee3-bb97-c687256c928c-kube-api-access-2gq9j\") pod \"b9789623-d528-4ee3-bb97-c687256c928c\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.280328 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-combined-ca-bundle\") pod \"b9789623-d528-4ee3-bb97-c687256c928c\" (UID: \"b9789623-d528-4ee3-bb97-c687256c928c\") " Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.283599 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b9789623-d528-4ee3-bb97-c687256c928c-kube-api-access-2gq9j" (OuterVolumeSpecName: "kube-api-access-2gq9j") pod "b9789623-d528-4ee3-bb97-c687256c928c" (UID: "b9789623-d528-4ee3-bb97-c687256c928c"). InnerVolumeSpecName "kube-api-access-2gq9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.285287 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b9789623-d528-4ee3-bb97-c687256c928c" (UID: "b9789623-d528-4ee3-bb97-c687256c928c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.336870 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9789623-d528-4ee3-bb97-c687256c928c" (UID: "b9789623-d528-4ee3-bb97-c687256c928c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.348940 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-config-data" (OuterVolumeSpecName: "config-data") pod "b9789623-d528-4ee3-bb97-c687256c928c" (UID: "b9789623-d528-4ee3-bb97-c687256c928c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.354884 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b9789623-d528-4ee3-bb97-c687256c928c" (UID: "b9789623-d528-4ee3-bb97-c687256c928c"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.405904 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.405943 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gq9j\" (UniqueName: \"kubernetes.io/projected/b9789623-d528-4ee3-bb97-c687256c928c-kube-api-access-2gq9j\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.405956 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.405969 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.405982 4704 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9789623-d528-4ee3-bb97-c687256c928c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.475204 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.616536 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.835204 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" event={"ID":"b9789623-d528-4ee3-bb97-c687256c928c","Type":"ContainerDied","Data":"3c3bb51409b5add9b40b63ef0caa69577a70c59a43298fdd0073c5a8695e96e4"} Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.835535 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3bb51409b5add9b40b63ef0caa69577a70c59a43298fdd0073c5a8695e96e4" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.835293 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29485021-xqqxb" Jan 22 17:01:05 crc kubenswrapper[4704]: I0122 17:01:05.843523 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"2783df74-2490-4583-b996-0b3795cc503b","Type":"ContainerStarted","Data":"dd7ddd79345d15c2b59dd04afd18f1b9b5da5f29e05f7028b50938ee5031072e"} Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.818096 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.852019 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"2783df74-2490-4583-b996-0b3795cc503b","Type":"ContainerStarted","Data":"19a2a6a94715d055458e14ffd1667438670739e57c1c093a53a4778117ab415d"} Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.855943 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"b2b0b39b-1e4c-4668-a833-1d54167690d7","Type":"ContainerStarted","Data":"ea4a958e7468a60615372cfb2e1c7458c3fec8f694854f94a69ae465f4d8afe8"} Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.855982 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" 
event={"ID":"b2b0b39b-1e4c-4668-a833-1d54167690d7","Type":"ContainerStarted","Data":"0f62634f392898a9742e216b1bd311a78130f960a3c6638fc5711f34d33a9682"} Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.859371 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"6a406690-1947-43ac-92da-34c51d2076a6","Type":"ContainerStarted","Data":"f1e77dc0e26e0e8754f691f9eb92f8f2a3d8c3402a398a05e693ca2eab20603e"} Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.859479 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="6a406690-1947-43ac-92da-34c51d2076a6" containerName="cinder-api-log" containerID="cri-o://9e7f0d8d10a724c3db98b89e9c67e200a48dbce3472d07b37b8e7817f476526c" gracePeriod=30 Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.859663 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.859696 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="6a406690-1947-43ac-92da-34c51d2076a6" containerName="cinder-api" containerID="cri-o://f1e77dc0e26e0e8754f691f9eb92f8f2a3d8c3402a398a05e693ca2eab20603e" gracePeriod=30 Jan 22 17:01:06 crc kubenswrapper[4704]: I0122 17:01:06.935025 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.854464861 podStartE2EDuration="3.934997269s" podCreationTimestamp="2026-01-22 17:01:03 +0000 UTC" firstStartedPulling="2026-01-22 17:01:04.150347782 +0000 UTC m=+1956.794894482" lastFinishedPulling="2026-01-22 17:01:05.23088019 +0000 UTC m=+1957.875426890" observedRunningTime="2026-01-22 17:01:06.87183892 +0000 UTC m=+1959.516385620" watchObservedRunningTime="2026-01-22 17:01:06.934997269 +0000 UTC m=+1959.579543969" Jan 22 17:01:06 crc 
kubenswrapper[4704]: I0122 17:01:06.955713 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=3.955682765 podStartE2EDuration="3.955682765s" podCreationTimestamp="2026-01-22 17:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:06.90183101 +0000 UTC m=+1959.546377740" watchObservedRunningTime="2026-01-22 17:01:06.955682765 +0000 UTC m=+1959.600229465" Jan 22 17:01:07 crc kubenswrapper[4704]: I0122 17:01:07.004899 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=2.799428902 podStartE2EDuration="4.00487945s" podCreationTimestamp="2026-01-22 17:01:03 +0000 UTC" firstStartedPulling="2026-01-22 17:01:03.956989873 +0000 UTC m=+1956.601536573" lastFinishedPulling="2026-01-22 17:01:05.162440421 +0000 UTC m=+1957.806987121" observedRunningTime="2026-01-22 17:01:06.949753057 +0000 UTC m=+1959.594299757" watchObservedRunningTime="2026-01-22 17:01:07.00487945 +0000 UTC m=+1959.649426150" Jan 22 17:01:07 crc kubenswrapper[4704]: I0122 17:01:07.872542 4704 generic.go:334] "Generic (PLEG): container finished" podID="6a406690-1947-43ac-92da-34c51d2076a6" containerID="f1e77dc0e26e0e8754f691f9eb92f8f2a3d8c3402a398a05e693ca2eab20603e" exitCode=0 Jan 22 17:01:07 crc kubenswrapper[4704]: I0122 17:01:07.873087 4704 generic.go:334] "Generic (PLEG): container finished" podID="6a406690-1947-43ac-92da-34c51d2076a6" containerID="9e7f0d8d10a724c3db98b89e9c67e200a48dbce3472d07b37b8e7817f476526c" exitCode=143 Jan 22 17:01:07 crc kubenswrapper[4704]: I0122 17:01:07.872647 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"6a406690-1947-43ac-92da-34c51d2076a6","Type":"ContainerDied","Data":"f1e77dc0e26e0e8754f691f9eb92f8f2a3d8c3402a398a05e693ca2eab20603e"} Jan 22 
17:01:07 crc kubenswrapper[4704]: I0122 17:01:07.873336 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"6a406690-1947-43ac-92da-34c51d2076a6","Type":"ContainerDied","Data":"9e7f0d8d10a724c3db98b89e9c67e200a48dbce3472d07b37b8e7817f476526c"} Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.017764 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.056773 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.154916 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-cert-memcached-mtls\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.154998 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a406690-1947-43ac-92da-34c51d2076a6-logs\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155028 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data-custom\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155103 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6a406690-1947-43ac-92da-34c51d2076a6-etc-machine-id\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155138 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-scripts\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155216 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-combined-ca-bundle\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155250 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5pmd\" (UniqueName: \"kubernetes.io/projected/6a406690-1947-43ac-92da-34c51d2076a6-kube-api-access-f5pmd\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155309 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data\") pod \"6a406690-1947-43ac-92da-34c51d2076a6\" (UID: \"6a406690-1947-43ac-92da-34c51d2076a6\") " Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155448 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a406690-1947-43ac-92da-34c51d2076a6-logs" (OuterVolumeSpecName: "logs") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.155637 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a406690-1947-43ac-92da-34c51d2076a6-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.159989 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a406690-1947-43ac-92da-34c51d2076a6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.161422 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.168961 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-scripts" (OuterVolumeSpecName: "scripts") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.182674 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a406690-1947-43ac-92da-34c51d2076a6-kube-api-access-f5pmd" (OuterVolumeSpecName: "kube-api-access-f5pmd") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). 
InnerVolumeSpecName "kube-api-access-f5pmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.187036 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.227021 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data" (OuterVolumeSpecName: "config-data") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.237359 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6a406690-1947-43ac-92da-34c51d2076a6" (UID: "6a406690-1947-43ac-92da-34c51d2076a6"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.257479 4704 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a406690-1947-43ac-92da-34c51d2076a6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.257520 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.257533 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.257543 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5pmd\" (UniqueName: \"kubernetes.io/projected/6a406690-1947-43ac-92da-34c51d2076a6-kube-api-access-f5pmd\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.257553 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.257561 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.257570 4704 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a406690-1947-43ac-92da-34c51d2076a6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.425244 
4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.437699 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.884160 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"6a406690-1947-43ac-92da-34c51d2076a6","Type":"ContainerDied","Data":"201e385fb931a72f874ff6e42e3363d364525cdff7af25c29b4a0660f5c94299"} Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.884471 4704 scope.go:117] "RemoveContainer" containerID="f1e77dc0e26e0e8754f691f9eb92f8f2a3d8c3402a398a05e693ca2eab20603e" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.884251 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.920087 4704 scope.go:117] "RemoveContainer" containerID="9e7f0d8d10a724c3db98b89e9c67e200a48dbce3472d07b37b8e7817f476526c" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.920335 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.926350 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.935329 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.984718 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:08 crc kubenswrapper[4704]: E0122 17:01:08.985320 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a406690-1947-43ac-92da-34c51d2076a6" 
containerName="cinder-api-log" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.985345 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a406690-1947-43ac-92da-34c51d2076a6" containerName="cinder-api-log" Jan 22 17:01:08 crc kubenswrapper[4704]: E0122 17:01:08.985371 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a406690-1947-43ac-92da-34c51d2076a6" containerName="cinder-api" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.985380 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a406690-1947-43ac-92da-34c51d2076a6" containerName="cinder-api" Jan 22 17:01:08 crc kubenswrapper[4704]: E0122 17:01:08.985437 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9789623-d528-4ee3-bb97-c687256c928c" containerName="keystone-cron" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.985448 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9789623-d528-4ee3-bb97-c687256c928c" containerName="keystone-cron" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.985671 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a406690-1947-43ac-92da-34c51d2076a6" containerName="cinder-api" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.985707 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9789623-d528-4ee3-bb97-c687256c928c" containerName="keystone-cron" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.985724 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a406690-1947-43ac-92da-34c51d2076a6" containerName="cinder-api-log" Jan 22 17:01:08 crc kubenswrapper[4704]: I0122 17:01:08.987159 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:08.996227 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-internal-svc" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:08.996358 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:08.996537 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-public-svc" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.005950 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.071609 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsxt\" (UniqueName: \"kubernetes.io/projected/39baf79a-d188-48e5-ba61-addf254f1257-kube-api-access-2hsxt\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.071662 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.071698 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data-custom\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 
17:01:09.071878 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.071934 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39baf79a-d188-48e5-ba61-addf254f1257-logs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.072023 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-public-tls-certs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.072082 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.072108 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39baf79a-d188-48e5-ba61-addf254f1257-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.072134 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.072160 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-scripts\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174251 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hsxt\" (UniqueName: \"kubernetes.io/projected/39baf79a-d188-48e5-ba61-addf254f1257-kube-api-access-2hsxt\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174321 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174367 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data-custom\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174470 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174518 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39baf79a-d188-48e5-ba61-addf254f1257-logs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174601 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-public-tls-certs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174674 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174711 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39baf79a-d188-48e5-ba61-addf254f1257-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174741 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " 
pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174774 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-scripts\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.174918 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39baf79a-d188-48e5-ba61-addf254f1257-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.175063 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39baf79a-d188-48e5-ba61-addf254f1257-logs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.179033 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.179257 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.179721 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-scripts\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.180602 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data-custom\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.181250 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-public-tls-certs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.184842 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.188528 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.202754 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hsxt\" (UniqueName: \"kubernetes.io/projected/39baf79a-d188-48e5-ba61-addf254f1257-kube-api-access-2hsxt\") pod \"cinder-api-0\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " 
pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.242817 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.319125 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.646840 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a406690-1947-43ac-92da-34c51d2076a6" path="/var/lib/kubelet/pods/6a406690-1947-43ac-92da-34c51d2076a6/volumes" Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.777727 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:09 crc kubenswrapper[4704]: W0122 17:01:09.788013 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39baf79a_d188_48e5_ba61_addf254f1257.slice/crio-5eb01e8e9752ff1d9568d3bc72d829f7ddc33af8f21dbe3d80dbbeb37988a302 WatchSource:0}: Error finding container 5eb01e8e9752ff1d9568d3bc72d829f7ddc33af8f21dbe3d80dbbeb37988a302: Status 404 returned error can't find the container with id 5eb01e8e9752ff1d9568d3bc72d829f7ddc33af8f21dbe3d80dbbeb37988a302 Jan 22 17:01:09 crc kubenswrapper[4704]: I0122 17:01:09.895871 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"39baf79a-d188-48e5-ba61-addf254f1257","Type":"ContainerStarted","Data":"5eb01e8e9752ff1d9568d3bc72d829f7ddc33af8f21dbe3d80dbbeb37988a302"} Jan 22 17:01:10 crc kubenswrapper[4704]: I0122 17:01:10.487268 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:10 
crc kubenswrapper[4704]: I0122 17:01:10.915402 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"39baf79a-d188-48e5-ba61-addf254f1257","Type":"ContainerStarted","Data":"990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981"} Jan 22 17:01:11 crc kubenswrapper[4704]: I0122 17:01:11.685028 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:11 crc kubenswrapper[4704]: I0122 17:01:11.923668 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"39baf79a-d188-48e5-ba61-addf254f1257","Type":"ContainerStarted","Data":"403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19"} Jan 22 17:01:11 crc kubenswrapper[4704]: I0122 17:01:11.923809 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:11 crc kubenswrapper[4704]: I0122 17:01:11.946098 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=3.946081444 podStartE2EDuration="3.946081444s" podCreationTimestamp="2026-01-22 17:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:11.940213468 +0000 UTC m=+1964.584760168" watchObservedRunningTime="2026-01-22 17:01:11.946081444 +0000 UTC m=+1964.590628144" Jan 22 17:01:12 crc kubenswrapper[4704]: I0122 17:01:12.870270 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.614335 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.623763 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.662646 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.678198 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.942453 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="cinder-backup" containerID="cri-o://dd7ddd79345d15c2b59dd04afd18f1b9b5da5f29e05f7028b50938ee5031072e" gracePeriod=30 Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.942651 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="probe" containerID="cri-o://ea4a958e7468a60615372cfb2e1c7458c3fec8f694854f94a69ae465f4d8afe8" gracePeriod=30 Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.942589 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="cinder-scheduler" containerID="cri-o://0f62634f392898a9742e216b1bd311a78130f960a3c6638fc5711f34d33a9682" gracePeriod=30 Jan 22 17:01:13 crc kubenswrapper[4704]: I0122 17:01:13.942558 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="probe" containerID="cri-o://19a2a6a94715d055458e14ffd1667438670739e57c1c093a53a4778117ab415d" gracePeriod=30 Jan 22 
17:01:14 crc kubenswrapper[4704]: I0122 17:01:14.030174 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.227690 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.399304 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.399911 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="653f8f63-8758-4a25-a51b-20169bfbce50" containerName="watcher-decision-engine" containerID="cri-o://639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6" gracePeriod=30 Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.959716 4704 generic.go:334] "Generic (PLEG): container finished" podID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerID="ea4a958e7468a60615372cfb2e1c7458c3fec8f694854f94a69ae465f4d8afe8" exitCode=0 Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.959773 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"b2b0b39b-1e4c-4668-a833-1d54167690d7","Type":"ContainerDied","Data":"ea4a958e7468a60615372cfb2e1c7458c3fec8f694854f94a69ae465f4d8afe8"} Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.961573 4704 generic.go:334] "Generic (PLEG): container finished" podID="2783df74-2490-4583-b996-0b3795cc503b" containerID="19a2a6a94715d055458e14ffd1667438670739e57c1c093a53a4778117ab415d" exitCode=0 Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.961688 4704 generic.go:334] "Generic (PLEG): container 
finished" podID="2783df74-2490-4583-b996-0b3795cc503b" containerID="dd7ddd79345d15c2b59dd04afd18f1b9b5da5f29e05f7028b50938ee5031072e" exitCode=0 Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.961651 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"2783df74-2490-4583-b996-0b3795cc503b","Type":"ContainerDied","Data":"19a2a6a94715d055458e14ffd1667438670739e57c1c093a53a4778117ab415d"} Jan 22 17:01:15 crc kubenswrapper[4704]: I0122 17:01:15.961841 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"2783df74-2490-4583-b996-0b3795cc503b","Type":"ContainerDied","Data":"dd7ddd79345d15c2b59dd04afd18f1b9b5da5f29e05f7028b50938ee5031072e"} Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.351375 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.430355 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500442 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-dev\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500519 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-run\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500545 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-lib-modules\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500589 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data-custom\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500606 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-combined-ca-bundle\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500659 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-cert-memcached-mtls\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500681 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-lib-cinder\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500711 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " 
Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500870 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-run" (OuterVolumeSpecName: "run") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500901 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-dev" (OuterVolumeSpecName: "dev") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500901 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.500965 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501609 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-machine-id\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501669 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-brick\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501691 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-nvme\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501704 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-sys\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501729 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-scripts\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501767 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkgt2\" (UniqueName: 
\"kubernetes.io/projected/2783df74-2490-4583-b996-0b3795cc503b-kube-api-access-fkgt2\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501805 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-cinder\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501863 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-iscsi\") pod \"2783df74-2490-4583-b996-0b3795cc503b\" (UID: \"2783df74-2490-4583-b996-0b3795cc503b\") " Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501946 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-sys" (OuterVolumeSpecName: "sys") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.501979 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502000 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502018 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502046 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502378 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502527 4704 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502543 4704 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-dev\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502553 4704 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-run\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502561 4704 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502569 4704 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502578 4704 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502586 4704 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502594 4704 reconciler_common.go:293] "Volume detached for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502604 4704 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-sys\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.502611 4704 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2783df74-2490-4583-b996-0b3795cc503b-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.506747 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.512097 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-scripts" (OuterVolumeSpecName: "scripts") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.512281 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2783df74-2490-4583-b996-0b3795cc503b-kube-api-access-fkgt2" (OuterVolumeSpecName: "kube-api-access-fkgt2") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "kube-api-access-fkgt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.535413 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.535762 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-central-agent" containerID="cri-o://85ce20a5f0a0c8aa1b6a12a678f33ec9de874c06ac1a8b7c5050afd74a40eea8" gracePeriod=30 Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.535870 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="proxy-httpd" containerID="cri-o://bf55c0ce75b66f26f88fb2a825fb47999f6e655610a2300b0b4d1a8ff2e8769f" gracePeriod=30 Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.536022 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="sg-core" containerID="cri-o://1575cfdfd8c36defc6b08cdb3a5e7ee4f2bb9f4c6c2241af64706efb3b0f6112" gracePeriod=30 Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.536069 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-notification-agent" containerID="cri-o://c94d9849c3ce2e7d0f909583a170a3fcf0a99662febc2b2fb44beb15f503125a" gracePeriod=30 Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.596309 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: 
"2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.604427 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkgt2\" (UniqueName: \"kubernetes.io/projected/2783df74-2490-4583-b996-0b3795cc503b-kube-api-access-fkgt2\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.604456 4704 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.604464 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.604473 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.620102 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data" (OuterVolumeSpecName: "config-data") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.677616 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2783df74-2490-4583-b996-0b3795cc503b" (UID: "2783df74-2490-4583-b996-0b3795cc503b"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.705771 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.706007 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2783df74-2490-4583-b996-0b3795cc503b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.973512 4704 generic.go:334] "Generic (PLEG): container finished" podID="544df65b-383c-41da-94b8-914c47c3e146" containerID="bf55c0ce75b66f26f88fb2a825fb47999f6e655610a2300b0b4d1a8ff2e8769f" exitCode=0 Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.973713 4704 generic.go:334] "Generic (PLEG): container finished" podID="544df65b-383c-41da-94b8-914c47c3e146" containerID="1575cfdfd8c36defc6b08cdb3a5e7ee4f2bb9f4c6c2241af64706efb3b0f6112" exitCode=2 Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.973859 4704 generic.go:334] "Generic (PLEG): container finished" podID="544df65b-383c-41da-94b8-914c47c3e146" containerID="85ce20a5f0a0c8aa1b6a12a678f33ec9de874c06ac1a8b7c5050afd74a40eea8" exitCode=0 Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.973723 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerDied","Data":"bf55c0ce75b66f26f88fb2a825fb47999f6e655610a2300b0b4d1a8ff2e8769f"} Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.974057 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerDied","Data":"1575cfdfd8c36defc6b08cdb3a5e7ee4f2bb9f4c6c2241af64706efb3b0f6112"} Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.974117 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerDied","Data":"85ce20a5f0a0c8aa1b6a12a678f33ec9de874c06ac1a8b7c5050afd74a40eea8"} Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.976040 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"2783df74-2490-4583-b996-0b3795cc503b","Type":"ContainerDied","Data":"6ff888cc9cafab1e6f2072f8073542021190146669eb96cbe6b8c5c4e003e98e"} Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.976107 4704 scope.go:117] "RemoveContainer" containerID="19a2a6a94715d055458e14ffd1667438670739e57c1c093a53a4778117ab415d" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.976183 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:16 crc kubenswrapper[4704]: I0122 17:01:16.997688 4704 scope.go:117] "RemoveContainer" containerID="dd7ddd79345d15c2b59dd04afd18f1b9b5da5f29e05f7028b50938ee5031072e" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.010287 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.017910 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.039153 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:17 crc kubenswrapper[4704]: E0122 17:01:17.039585 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="cinder-backup" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.039633 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="cinder-backup" Jan 22 17:01:17 crc kubenswrapper[4704]: E0122 17:01:17.039656 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="probe" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.039662 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="probe" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.039915 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="cinder-backup" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.039955 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2783df74-2490-4583-b996-0b3795cc503b" containerName="probe" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.043095 4704 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.046400 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.052023 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215083 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215121 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215156 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-dev\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215179 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-lib-modules\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:17 crc kubenswrapper[4704]: 
I0122 17:01:17.215201 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215229 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxx2b\" (UniqueName: \"kubernetes.io/projected/41a036fd-d350-49ff-8d77-3ee76652a92f-kube-api-access-cxx2b\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215247 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215270 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215311 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215327 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-run\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215351 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215403 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-scripts\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215427 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215455 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215470 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-sys\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.215489 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.316948 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.316987 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-sys\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317011 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317033 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317219 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317249 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-dev\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317311 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-dev\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317332 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-lib-modules\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317187 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-nvme\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317098 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317285 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317316 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-lib-modules\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317445 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317098 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-sys\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317098 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317519 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317542 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxx2b\" (UniqueName: \"kubernetes.io/projected/41a036fd-d350-49ff-8d77-3ee76652a92f-kube-api-access-cxx2b\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317576 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317618 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.317648 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.318212 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-run\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.318252 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-run\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.318254 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.318384 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-scripts\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.318439 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.318589 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.322172 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.322217 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-scripts\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.322421 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data-custom\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.322754 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.324711 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.332711 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxx2b\" (UniqueName: \"kubernetes.io/projected/41a036fd-d350-49ff-8d77-3ee76652a92f-kube-api-access-cxx2b\") pod \"cinder-backup-0\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.389041 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.619509 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_653f8f63-8758-4a25-a51b-20169bfbce50/watcher-decision-engine/0.log"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.657235 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2783df74-2490-4583-b996-0b3795cc503b" path="/var/lib/kubelet/pods/2783df74-2490-4583-b996-0b3795cc503b/volumes"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.837554 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.873081 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"]
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.941534 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-custom-prometheus-ca\") pod \"653f8f63-8758-4a25-a51b-20169bfbce50\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") "
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.941806 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653f8f63-8758-4a25-a51b-20169bfbce50-logs\") pod \"653f8f63-8758-4a25-a51b-20169bfbce50\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") "
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.941980 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-combined-ca-bundle\") pod \"653f8f63-8758-4a25-a51b-20169bfbce50\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") "
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.942005 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-cert-memcached-mtls\") pod \"653f8f63-8758-4a25-a51b-20169bfbce50\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") "
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.942027 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-config-data\") pod \"653f8f63-8758-4a25-a51b-20169bfbce50\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") "
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.942096 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnfnr\" (UniqueName: \"kubernetes.io/projected/653f8f63-8758-4a25-a51b-20169bfbce50-kube-api-access-lnfnr\") pod \"653f8f63-8758-4a25-a51b-20169bfbce50\" (UID: \"653f8f63-8758-4a25-a51b-20169bfbce50\") "
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.945212 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653f8f63-8758-4a25-a51b-20169bfbce50-kube-api-access-lnfnr" (OuterVolumeSpecName: "kube-api-access-lnfnr") pod "653f8f63-8758-4a25-a51b-20169bfbce50" (UID: "653f8f63-8758-4a25-a51b-20169bfbce50"). InnerVolumeSpecName "kube-api-access-lnfnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.949000 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/653f8f63-8758-4a25-a51b-20169bfbce50-logs" (OuterVolumeSpecName: "logs") pod "653f8f63-8758-4a25-a51b-20169bfbce50" (UID: "653f8f63-8758-4a25-a51b-20169bfbce50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.972921 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "653f8f63-8758-4a25-a51b-20169bfbce50" (UID: "653f8f63-8758-4a25-a51b-20169bfbce50"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.993301 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "653f8f63-8758-4a25-a51b-20169bfbce50" (UID: "653f8f63-8758-4a25-a51b-20169bfbce50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.993336 4704 generic.go:334] "Generic (PLEG): container finished" podID="544df65b-383c-41da-94b8-914c47c3e146" containerID="c94d9849c3ce2e7d0f909583a170a3fcf0a99662febc2b2fb44beb15f503125a" exitCode=0
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.993413 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerDied","Data":"c94d9849c3ce2e7d0f909583a170a3fcf0a99662febc2b2fb44beb15f503125a"}
Jan 22 17:01:17 crc kubenswrapper[4704]: I0122 17:01:17.999645 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"41a036fd-d350-49ff-8d77-3ee76652a92f","Type":"ContainerStarted","Data":"292628209c336b6e68d8239670dfd8a441db8ca15a347ce1ca8463a05a5e1d2d"}
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.004262 4704 generic.go:334] "Generic (PLEG): container finished" podID="653f8f63-8758-4a25-a51b-20169bfbce50" containerID="639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6" exitCode=0
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.004303 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"653f8f63-8758-4a25-a51b-20169bfbce50","Type":"ContainerDied","Data":"639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6"}
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.004327 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"653f8f63-8758-4a25-a51b-20169bfbce50","Type":"ContainerDied","Data":"ce99b7cd74f952b574724a16e15857bb2f04ab1948d63308a23624ef65760f82"}
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.004345 4704 scope.go:117] "RemoveContainer" containerID="639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.004492 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.040577 4704 scope.go:117] "RemoveContainer" containerID="639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.043751 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnfnr\" (UniqueName: \"kubernetes.io/projected/653f8f63-8758-4a25-a51b-20169bfbce50-kube-api-access-lnfnr\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.043772 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.043782 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653f8f63-8758-4a25-a51b-20169bfbce50-logs\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.043806 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: E0122 17:01:18.043982 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6\": container with ID starting with 639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6 not found: ID does not exist" containerID="639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.044019 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6"} err="failed to get container status \"639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6\": rpc error: code = NotFound desc = could not find container \"639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6\": container with ID starting with 639ad8d98b711fccd9c7bb3969b97f52ccd9726c0956dfaeccbb98e8f11efbe6 not found: ID does not exist"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.044451 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-config-data" (OuterVolumeSpecName: "config-data") pod "653f8f63-8758-4a25-a51b-20169bfbce50" (UID: "653f8f63-8758-4a25-a51b-20169bfbce50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.050888 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "653f8f63-8758-4a25-a51b-20169bfbce50" (UID: "653f8f63-8758-4a25-a51b-20169bfbce50"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.145904 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.146454 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653f8f63-8758-4a25-a51b-20169bfbce50-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.160160 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.247514 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-log-httpd\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.247841 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-scripts\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.247934 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-config-data\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.247988 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9msrb\" (UniqueName: \"kubernetes.io/projected/544df65b-383c-41da-94b8-914c47c3e146-kube-api-access-9msrb\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.248013 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-combined-ca-bundle\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.248080 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-run-httpd\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.248102 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-sg-core-conf-yaml\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.248172 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-ceilometer-tls-certs\") pod \"544df65b-383c-41da-94b8-914c47c3e146\" (UID: \"544df65b-383c-41da-94b8-914c47c3e146\") "
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.248413 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.248784 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.250713 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.253556 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-scripts" (OuterVolumeSpecName: "scripts") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.254481 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544df65b-383c-41da-94b8-914c47c3e146-kube-api-access-9msrb" (OuterVolumeSpecName: "kube-api-access-9msrb") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "kube-api-access-9msrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.349976 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9msrb\" (UniqueName: \"kubernetes.io/projected/544df65b-383c-41da-94b8-914c47c3e146-kube-api-access-9msrb\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.350012 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/544df65b-383c-41da-94b8-914c47c3e146-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.350027 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.350890 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.397922 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.398789 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.445422 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-config-data" (OuterVolumeSpecName: "config-data") pod "544df65b-383c-41da-94b8-914c47c3e146" (UID: "544df65b-383c-41da-94b8-914c47c3e146"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.451216 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.451260 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.451274 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.451286 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/544df65b-383c-41da-94b8-914c47c3e146-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.526946 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.532685 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558313 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 17:01:18 crc kubenswrapper[4704]: E0122 17:01:18.558644 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="sg-core"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558662 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="sg-core"
Jan 22 17:01:18 crc kubenswrapper[4704]: E0122 17:01:18.558675 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-notification-agent"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558683 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-notification-agent"
Jan 22 17:01:18 crc kubenswrapper[4704]: E0122 17:01:18.558718 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653f8f63-8758-4a25-a51b-20169bfbce50" containerName="watcher-decision-engine"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558724 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="653f8f63-8758-4a25-a51b-20169bfbce50" containerName="watcher-decision-engine"
Jan 22 17:01:18 crc kubenswrapper[4704]: E0122 17:01:18.558738 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="proxy-httpd"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558743 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="proxy-httpd"
Jan 22 17:01:18 crc kubenswrapper[4704]: E0122 17:01:18.558753 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-central-agent"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558760 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-central-agent"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558952 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-notification-agent"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.558973 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="653f8f63-8758-4a25-a51b-20169bfbce50" containerName="watcher-decision-engine"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.559000 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="proxy-httpd"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.559014 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="ceilometer-central-agent"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.559026 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="544df65b-383c-41da-94b8-914c47c3e146" containerName="sg-core"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.559694 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.563260 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.576504 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.659181 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.659245 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.659273 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7940e-ed78-428b-ac74-1b515c5e0b71-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.659333 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpsv\" (UniqueName:
\"kubernetes.io/projected/cec7940e-ed78-428b-ac74-1b515c5e0b71-kube-api-access-7jpsv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.659361 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.659391 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.760961 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.761007 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7940e-ed78-428b-ac74-1b515c5e0b71-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.761041 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7jpsv\" (UniqueName: \"kubernetes.io/projected/cec7940e-ed78-428b-ac74-1b515c5e0b71-kube-api-access-7jpsv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.761070 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.761089 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.761155 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.762002 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7940e-ed78-428b-ac74-1b515c5e0b71-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.766243 4704 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.778496 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.779228 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.782199 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jpsv\" (UniqueName: \"kubernetes.io/projected/cec7940e-ed78-428b-ac74-1b515c5e0b71-kube-api-access-7jpsv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 17:01:18.786380 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:18 crc kubenswrapper[4704]: I0122 
17:01:18.949267 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.037351 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"544df65b-383c-41da-94b8-914c47c3e146","Type":"ContainerDied","Data":"8094b22d3dff39c86edf9731b08392bdf4257f6e1876327256b00a567d696ec0"} Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.037379 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.037397 4704 scope.go:117] "RemoveContainer" containerID="bf55c0ce75b66f26f88fb2a825fb47999f6e655610a2300b0b4d1a8ff2e8769f" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.085383 4704 generic.go:334] "Generic (PLEG): container finished" podID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerID="0f62634f392898a9742e216b1bd311a78130f960a3c6638fc5711f34d33a9682" exitCode=0 Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.085486 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"b2b0b39b-1e4c-4668-a833-1d54167690d7","Type":"ContainerDied","Data":"0f62634f392898a9742e216b1bd311a78130f960a3c6638fc5711f34d33a9682"} Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.085867 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.085919 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.088419 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.088459 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"41a036fd-d350-49ff-8d77-3ee76652a92f","Type":"ContainerStarted","Data":"01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80"} Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.088476 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"41a036fd-d350-49ff-8d77-3ee76652a92f","Type":"ContainerStarted","Data":"d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39"} Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.108887 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.123932 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.134152 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.136537 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.136871 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.137046 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.143687 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.148317 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.148300359 podStartE2EDuration="2.148300359s" podCreationTimestamp="2026-01-22 17:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:19.132642815 +0000 UTC m=+1971.777189515" watchObservedRunningTime="2026-01-22 17:01:19.148300359 +0000 UTC m=+1971.792847049" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.170675 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-scripts\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.170731 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.170753 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qvx\" (UniqueName: \"kubernetes.io/projected/d81d18f1-fedb-4edb-9713-7ff9024ba03d-kube-api-access-78qvx\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.170904 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.171021 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-config-data\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.171100 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.171246 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.171268 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-log-httpd\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.173066 4704 scope.go:117] "RemoveContainer" containerID="1575cfdfd8c36defc6b08cdb3a5e7ee4f2bb9f4c6c2241af64706efb3b0f6112" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.219873 4704 scope.go:117] "RemoveContainer" containerID="c94d9849c3ce2e7d0f909583a170a3fcf0a99662febc2b2fb44beb15f503125a" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272396 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-config-data\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272463 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272520 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-run-httpd\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272536 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-log-httpd\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272564 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-scripts\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272612 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272639 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78qvx\" (UniqueName: \"kubernetes.io/projected/d81d18f1-fedb-4edb-9713-7ff9024ba03d-kube-api-access-78qvx\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.272673 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.278062 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.278215 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-log-httpd\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.278298 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-run-httpd\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.285229 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-config-data\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.285660 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.294167 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 
crc kubenswrapper[4704]: I0122 17:01:19.298065 4704 scope.go:117] "RemoveContainer" containerID="85ce20a5f0a0c8aa1b6a12a678f33ec9de874c06ac1a8b7c5050afd74a40eea8" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.299061 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78qvx\" (UniqueName: \"kubernetes.io/projected/d81d18f1-fedb-4edb-9713-7ff9024ba03d-kube-api-access-78qvx\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.313818 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-scripts\") pod \"ceilometer-0\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.475228 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.483482 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.577838 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-combined-ca-bundle\") pod \"b2b0b39b-1e4c-4668-a833-1d54167690d7\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.577874 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b0b39b-1e4c-4668-a833-1d54167690d7-etc-machine-id\") pod \"b2b0b39b-1e4c-4668-a833-1d54167690d7\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.577901 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data\") pod \"b2b0b39b-1e4c-4668-a833-1d54167690d7\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.577948 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data-custom\") pod \"b2b0b39b-1e4c-4668-a833-1d54167690d7\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.578024 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-scripts\") pod \"b2b0b39b-1e4c-4668-a833-1d54167690d7\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.578085 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcd4l\" 
(UniqueName: \"kubernetes.io/projected/b2b0b39b-1e4c-4668-a833-1d54167690d7-kube-api-access-qcd4l\") pod \"b2b0b39b-1e4c-4668-a833-1d54167690d7\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.578163 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-cert-memcached-mtls\") pod \"b2b0b39b-1e4c-4668-a833-1d54167690d7\" (UID: \"b2b0b39b-1e4c-4668-a833-1d54167690d7\") " Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.579452 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2b0b39b-1e4c-4668-a833-1d54167690d7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b2b0b39b-1e4c-4668-a833-1d54167690d7" (UID: "b2b0b39b-1e4c-4668-a833-1d54167690d7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.587988 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2b0b39b-1e4c-4668-a833-1d54167690d7-kube-api-access-qcd4l" (OuterVolumeSpecName: "kube-api-access-qcd4l") pod "b2b0b39b-1e4c-4668-a833-1d54167690d7" (UID: "b2b0b39b-1e4c-4668-a833-1d54167690d7"). InnerVolumeSpecName "kube-api-access-qcd4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.612876 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b2b0b39b-1e4c-4668-a833-1d54167690d7" (UID: "b2b0b39b-1e4c-4668-a833-1d54167690d7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.612971 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-scripts" (OuterVolumeSpecName: "scripts") pod "b2b0b39b-1e4c-4668-a833-1d54167690d7" (UID: "b2b0b39b-1e4c-4668-a833-1d54167690d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.675019 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544df65b-383c-41da-94b8-914c47c3e146" path="/var/lib/kubelet/pods/544df65b-383c-41da-94b8-914c47c3e146/volumes" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.675894 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653f8f63-8758-4a25-a51b-20169bfbce50" path="/var/lib/kubelet/pods/653f8f63-8758-4a25-a51b-20169bfbce50/volumes" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.679882 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.679908 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcd4l\" (UniqueName: \"kubernetes.io/projected/b2b0b39b-1e4c-4668-a833-1d54167690d7-kube-api-access-qcd4l\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.679921 4704 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b0b39b-1e4c-4668-a833-1d54167690d7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.679929 4704 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.680278 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:19 crc kubenswrapper[4704]: W0122 17:01:19.695447 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcec7940e_ed78_428b_ac74_1b515c5e0b71.slice/crio-7a0f66c8517323c65520dd4ac530e58cd6211c59ecdd70f0afc3691514459798 WatchSource:0}: Error finding container 7a0f66c8517323c65520dd4ac530e58cd6211c59ecdd70f0afc3691514459798: Status 404 returned error can't find the container with id 7a0f66c8517323c65520dd4ac530e58cd6211c59ecdd70f0afc3691514459798 Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.720641 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2b0b39b-1e4c-4668-a833-1d54167690d7" (UID: "b2b0b39b-1e4c-4668-a833-1d54167690d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.783854 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.833713 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data" (OuterVolumeSpecName: "config-data") pod "b2b0b39b-1e4c-4668-a833-1d54167690d7" (UID: "b2b0b39b-1e4c-4668-a833-1d54167690d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.842926 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b2b0b39b-1e4c-4668-a833-1d54167690d7" (UID: "b2b0b39b-1e4c-4668-a833-1d54167690d7"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.885119 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:19 crc kubenswrapper[4704]: I0122 17:01:19.885155 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b0b39b-1e4c-4668-a833-1d54167690d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.004268 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:20 crc kubenswrapper[4704]: W0122 17:01:20.006921 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd81d18f1_fedb_4edb_9713_7ff9024ba03d.slice/crio-2deae7d53ad085fc521d4a05865ced553d889b8c7069589b4174dc4989154ec9 WatchSource:0}: Error finding container 2deae7d53ad085fc521d4a05865ced553d889b8c7069589b4174dc4989154ec9: Status 404 returned error can't find the container with id 2deae7d53ad085fc521d4a05865ced553d889b8c7069589b4174dc4989154ec9 Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.123373 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" 
event={"ID":"b2b0b39b-1e4c-4668-a833-1d54167690d7","Type":"ContainerDied","Data":"0b91cc86380d6205c296255b9800ba5b95be94021e84e28ed57ca16934706f5f"} Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.123413 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.123417 4704 scope.go:117] "RemoveContainer" containerID="ea4a958e7468a60615372cfb2e1c7458c3fec8f694854f94a69ae465f4d8afe8" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.124837 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cec7940e-ed78-428b-ac74-1b515c5e0b71","Type":"ContainerStarted","Data":"2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b"} Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.124875 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cec7940e-ed78-428b-ac74-1b515c5e0b71","Type":"ContainerStarted","Data":"7a0f66c8517323c65520dd4ac530e58cd6211c59ecdd70f0afc3691514459798"} Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.126887 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerStarted","Data":"2deae7d53ad085fc521d4a05865ced553d889b8c7069589b4174dc4989154ec9"} Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.143379 4704 scope.go:117] "RemoveContainer" containerID="0f62634f392898a9742e216b1bd311a78130f960a3c6638fc5711f34d33a9682" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.159229 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.159212095 podStartE2EDuration="2.159212095s" podCreationTimestamp="2026-01-22 17:01:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:20.144193419 +0000 UTC m=+1972.788740119" watchObservedRunningTime="2026-01-22 17:01:20.159212095 +0000 UTC m=+1972.803758795" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.176812 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.200440 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.210502 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:20 crc kubenswrapper[4704]: E0122 17:01:20.210892 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="probe" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.210910 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="probe" Jan 22 17:01:20 crc kubenswrapper[4704]: E0122 17:01:20.210934 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="cinder-scheduler" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.210940 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="cinder-scheduler" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.211126 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="cinder-scheduler" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.211151 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" containerName="probe" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.211995 
4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.214386 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.227276 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.295716 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.295782 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.295845 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.295989 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tsfh\" (UniqueName: \"kubernetes.io/projected/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-kube-api-access-2tsfh\") pod \"cinder-scheduler-0\" (UID: 
\"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.296052 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.296091 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.296174 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.398955 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.399030 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.399058 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tsfh\" (UniqueName: \"kubernetes.io/projected/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-kube-api-access-2tsfh\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.399085 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.399571 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.400144 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.400530 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.400661 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.405607 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.407609 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.425682 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-scripts\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.426645 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.429918 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2tsfh\" (UniqueName: \"kubernetes.io/projected/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-kube-api-access-2tsfh\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.430389 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.527803 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:20 crc kubenswrapper[4704]: I0122 17:01:20.987008 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:21 crc kubenswrapper[4704]: I0122 17:01:21.136740 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:21 crc kubenswrapper[4704]: I0122 17:01:21.140573 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerStarted","Data":"9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719"} Jan 22 17:01:21 crc kubenswrapper[4704]: I0122 17:01:21.160540 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"e8f1ee52-df88-4106-b7ef-ed0bb39739ba","Type":"ContainerStarted","Data":"cf0b12b9102d7f34c044ef238184876e909b7cd6706fb26d9f003a6a88440b77"} Jan 22 17:01:21 crc kubenswrapper[4704]: I0122 17:01:21.672339 4704 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b2b0b39b-1e4c-4668-a833-1d54167690d7" path="/var/lib/kubelet/pods/b2b0b39b-1e4c-4668-a833-1d54167690d7/volumes" Jan 22 17:01:22 crc kubenswrapper[4704]: I0122 17:01:22.035235 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:22 crc kubenswrapper[4704]: I0122 17:01:22.171105 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"e8f1ee52-df88-4106-b7ef-ed0bb39739ba","Type":"ContainerStarted","Data":"1dcf4d7fc77235e6388c275522d2987dcfdf4a4f6b93a6509c3878fc14f726db"} Jan 22 17:01:22 crc kubenswrapper[4704]: I0122 17:01:22.173002 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerStarted","Data":"a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9"} Jan 22 17:01:22 crc kubenswrapper[4704]: I0122 17:01:22.314097 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:22 crc kubenswrapper[4704]: I0122 17:01:22.389716 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:23 crc kubenswrapper[4704]: I0122 17:01:23.184175 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"e8f1ee52-df88-4106-b7ef-ed0bb39739ba","Type":"ContainerStarted","Data":"c62f552295ff84b1a41c7ae83930fb9227a910074c312c6a160eeea937d976ab"} Jan 22 17:01:23 crc kubenswrapper[4704]: I0122 17:01:23.187500 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerStarted","Data":"94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8"} Jan 22 17:01:23 crc kubenswrapper[4704]: I0122 17:01:23.483457 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:24 crc kubenswrapper[4704]: I0122 17:01:24.200713 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerStarted","Data":"66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f"} Jan 22 17:01:24 crc kubenswrapper[4704]: I0122 17:01:24.232619 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=4.232578189 podStartE2EDuration="4.232578189s" podCreationTimestamp="2026-01-22 17:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:23.204218449 +0000 UTC m=+1975.848765149" watchObservedRunningTime="2026-01-22 17:01:24.232578189 +0000 UTC m=+1976.877124889" Jan 22 17:01:24 crc kubenswrapper[4704]: I0122 17:01:24.237169 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.505887438 podStartE2EDuration="5.237130668s" podCreationTimestamp="2026-01-22 17:01:19 +0000 UTC" firstStartedPulling="2026-01-22 17:01:20.015634436 +0000 UTC m=+1972.660181136" lastFinishedPulling="2026-01-22 17:01:23.746877656 +0000 UTC m=+1976.391424366" observedRunningTime="2026-01-22 17:01:24.232212599 +0000 UTC m=+1976.876759339" watchObservedRunningTime="2026-01-22 17:01:24.237130668 +0000 UTC m=+1976.881677408" Jan 22 17:01:24 crc kubenswrapper[4704]: I0122 17:01:24.704578 4704 log.go:25] "Finished 
parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:25 crc kubenswrapper[4704]: I0122 17:01:25.208339 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:25 crc kubenswrapper[4704]: I0122 17:01:25.529523 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:25 crc kubenswrapper[4704]: I0122 17:01:25.914125 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:27 crc kubenswrapper[4704]: I0122 17:01:27.107659 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:27 crc kubenswrapper[4704]: I0122 17:01:27.659637 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:28 crc kubenswrapper[4704]: I0122 17:01:28.323360 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:28 crc kubenswrapper[4704]: I0122 17:01:28.950155 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:28 crc kubenswrapper[4704]: I0122 17:01:28.978282 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:29 crc kubenswrapper[4704]: I0122 17:01:29.241478 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:29 crc kubenswrapper[4704]: I0122 17:01:29.269467 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:29 crc kubenswrapper[4704]: I0122 17:01:29.515350 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:30 crc kubenswrapper[4704]: I0122 17:01:30.724502 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:30 crc kubenswrapper[4704]: I0122 17:01:30.738256 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:30 crc kubenswrapper[4704]: I0122 17:01:30.966428 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.040055 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-42hqh"] Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.060576 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-42hqh"] Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.077548 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.078092 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerName="cinder-backup" 
containerID="cri-o://d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39" gracePeriod=30 Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.078343 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerName="probe" containerID="cri-o://01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80" gracePeriod=30 Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.086636 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.131140 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.132092 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api-log" containerID="cri-o://990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981" gracePeriod=30 Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.132293 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api" containerID="cri-o://403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19" gracePeriod=30 Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.154180 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder0423-account-delete-jkmhj"] Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.155243 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.167142 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder0423-account-delete-jkmhj"] Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.207562 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2624279d-7077-4056-92b8-818707451a5b-operator-scripts\") pod \"cinder0423-account-delete-jkmhj\" (UID: \"2624279d-7077-4056-92b8-818707451a5b\") " pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.207600 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5d2s\" (UniqueName: \"kubernetes.io/projected/2624279d-7077-4056-92b8-818707451a5b-kube-api-access-t5d2s\") pod \"cinder0423-account-delete-jkmhj\" (UID: \"2624279d-7077-4056-92b8-818707451a5b\") " pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.274831 4704 generic.go:334] "Generic (PLEG): container finished" podID="39baf79a-d188-48e5-ba61-addf254f1257" containerID="990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981" exitCode=143 Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.274897 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"39baf79a-d188-48e5-ba61-addf254f1257","Type":"ContainerDied","Data":"990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981"} Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.275106 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="cinder-scheduler" 
containerID="cri-o://1dcf4d7fc77235e6388c275522d2987dcfdf4a4f6b93a6509c3878fc14f726db" gracePeriod=30 Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.275227 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="probe" containerID="cri-o://c62f552295ff84b1a41c7ae83930fb9227a910074c312c6a160eeea937d976ab" gracePeriod=30 Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.309045 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2624279d-7077-4056-92b8-818707451a5b-operator-scripts\") pod \"cinder0423-account-delete-jkmhj\" (UID: \"2624279d-7077-4056-92b8-818707451a5b\") " pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.309087 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5d2s\" (UniqueName: \"kubernetes.io/projected/2624279d-7077-4056-92b8-818707451a5b-kube-api-access-t5d2s\") pod \"cinder0423-account-delete-jkmhj\" (UID: \"2624279d-7077-4056-92b8-818707451a5b\") " pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.310145 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2624279d-7077-4056-92b8-818707451a5b-operator-scripts\") pod \"cinder0423-account-delete-jkmhj\" (UID: \"2624279d-7077-4056-92b8-818707451a5b\") " pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.330174 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5d2s\" (UniqueName: \"kubernetes.io/projected/2624279d-7077-4056-92b8-818707451a5b-kube-api-access-t5d2s\") pod \"cinder0423-account-delete-jkmhj\" 
(UID: \"2624279d-7077-4056-92b8-818707451a5b\") " pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.485024 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.649965 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b97177-c5dd-4e1c-bc12-a24678377554" path="/var/lib/kubelet/pods/30b97177-c5dd-4e1c-bc12-a24678377554/volumes" Jan 22 17:01:31 crc kubenswrapper[4704]: I0122 17:01:31.955437 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder0423-account-delete-jkmhj"] Jan 22 17:01:32 crc kubenswrapper[4704]: I0122 17:01:32.158501 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:32 crc kubenswrapper[4704]: I0122 17:01:32.286121 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" event={"ID":"2624279d-7077-4056-92b8-818707451a5b","Type":"ContainerStarted","Data":"6c1218064fc4093e0762edae03c8db451a9f1be5979771079586d32f6dc20fad"} Jan 22 17:01:32 crc kubenswrapper[4704]: I0122 17:01:32.286173 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" event={"ID":"2624279d-7077-4056-92b8-818707451a5b","Type":"ContainerStarted","Data":"203cbba87889c4b97026bafbba735e002d877862c8eb6abb11adfb3b770e5221"} Jan 22 17:01:32 crc kubenswrapper[4704]: I0122 17:01:32.287437 4704 generic.go:334] "Generic (PLEG): container finished" podID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerID="01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80" exitCode=0 Jan 22 17:01:32 crc kubenswrapper[4704]: I0122 17:01:32.287481 4704 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"41a036fd-d350-49ff-8d77-3ee76652a92f","Type":"ContainerDied","Data":"01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80"} Jan 22 17:01:32 crc kubenswrapper[4704]: I0122 17:01:32.307594 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" podStartSLOduration=1.307575426 podStartE2EDuration="1.307575426s" podCreationTimestamp="2026-01-22 17:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:32.300012911 +0000 UTC m=+1984.944559631" watchObservedRunningTime="2026-01-22 17:01:32.307575426 +0000 UTC m=+1984.952122126" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.184022 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.184468 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.184710 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="cec7940e-ed78-428b-ac74-1b515c5e0b71" containerName="watcher-decision-engine" containerID="cri-o://2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b" gracePeriod=30 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.296551 4704 generic.go:334] "Generic (PLEG): container finished" podID="2624279d-7077-4056-92b8-818707451a5b" containerID="6c1218064fc4093e0762edae03c8db451a9f1be5979771079586d32f6dc20fad" exitCode=0 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.296608 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" 
event={"ID":"2624279d-7077-4056-92b8-818707451a5b","Type":"ContainerDied","Data":"6c1218064fc4093e0762edae03c8db451a9f1be5979771079586d32f6dc20fad"} Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.299249 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerID="c62f552295ff84b1a41c7ae83930fb9227a910074c312c6a160eeea937d976ab" exitCode=0 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.299271 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerID="1dcf4d7fc77235e6388c275522d2987dcfdf4a4f6b93a6509c3878fc14f726db" exitCode=0 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.299303 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"e8f1ee52-df88-4106-b7ef-ed0bb39739ba","Type":"ContainerDied","Data":"c62f552295ff84b1a41c7ae83930fb9227a910074c312c6a160eeea937d976ab"} Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.299324 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"e8f1ee52-df88-4106-b7ef-ed0bb39739ba","Type":"ContainerDied","Data":"1dcf4d7fc77235e6388c275522d2987dcfdf4a4f6b93a6509c3878fc14f726db"} Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.301405 4704 generic.go:334] "Generic (PLEG): container finished" podID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerID="d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39" exitCode=0 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.301438 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"41a036fd-d350-49ff-8d77-3ee76652a92f","Type":"ContainerDied","Data":"d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39"} Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.301461 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"41a036fd-d350-49ff-8d77-3ee76652a92f","Type":"ContainerDied","Data":"292628209c336b6e68d8239670dfd8a441db8ca15a347ce1ca8463a05a5e1d2d"} Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.301480 4704 scope.go:117] "RemoveContainer" containerID="01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.301626 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.342396 4704 scope.go:117] "RemoveContainer" containerID="d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.346748 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-sys\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.346833 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.346878 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-cert-memcached-mtls\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.346905 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-lib-modules\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.346928 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-combined-ca-bundle\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.346956 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-dev\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.346981 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-brick\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347074 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-machine-id\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347128 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-iscsi\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347162 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxx2b\" (UniqueName: \"kubernetes.io/projected/41a036fd-d350-49ff-8d77-3ee76652a92f-kube-api-access-cxx2b\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347262 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-cinder\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347304 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-scripts\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347323 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-run\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347351 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-nvme\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347371 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-lib-cinder\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: 
\"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347406 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data-custom\") pod \"41a036fd-d350-49ff-8d77-3ee76652a92f\" (UID: \"41a036fd-d350-49ff-8d77-3ee76652a92f\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347778 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347821 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347810 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347870 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-sys" (OuterVolumeSpecName: "sys") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347872 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347782 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.347930 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-run" (OuterVolumeSpecName: "run") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348123 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348372 4704 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348393 4704 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348406 4704 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-run\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348414 4704 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348427 4704 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348436 4704 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-sys\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348444 4704 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348474 4704 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348514 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-dev" (OuterVolumeSpecName: "dev") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.348536 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.353422 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a036fd-d350-49ff-8d77-3ee76652a92f-kube-api-access-cxx2b" (OuterVolumeSpecName: "kube-api-access-cxx2b") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "kube-api-access-cxx2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.356152 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.356572 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.361899 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-scripts" (OuterVolumeSpecName: "scripts") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.373367 4704 scope.go:117] "RemoveContainer" containerID="01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80" Jan 22 17:01:33 crc kubenswrapper[4704]: E0122 17:01:33.373817 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80\": container with ID starting with 01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80 not found: ID does not exist" containerID="01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.373843 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80"} err="failed to get container status \"01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80\": rpc error: code = NotFound desc = could not find container \"01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80\": container with ID starting with 01602b4cf71017f028763c9f8c5be7cdbb5177f6171c039891769de5c4d7ea80 not found: ID does not exist" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.373862 4704 scope.go:117] "RemoveContainer" containerID="d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39" Jan 22 17:01:33 crc kubenswrapper[4704]: E0122 17:01:33.374308 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39\": container with ID starting with d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39 not found: ID does not exist" containerID="d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.374334 
4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39"} err="failed to get container status \"d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39\": rpc error: code = NotFound desc = could not find container \"d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39\": container with ID starting with d077f85640c59fbbe6cd141dc7abb71039cfd45db8f472eb89d5476851b74c39 not found: ID does not exist" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.392312 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.428655 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data" (OuterVolumeSpecName: "config-data") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.451534 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxx2b\" (UniqueName: \"kubernetes.io/projected/41a036fd-d350-49ff-8d77-3ee76652a92f-kube-api-access-cxx2b\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.451656 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.451714 4704 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.451768 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.451844 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.451900 4704 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-dev\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.451959 4704 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/41a036fd-d350-49ff-8d77-3ee76652a92f-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.482058 4704 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "41a036fd-d350-49ff-8d77-3ee76652a92f" (UID: "41a036fd-d350-49ff-8d77-3ee76652a92f"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.553603 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/41a036fd-d350-49ff-8d77-3ee76652a92f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.665020 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.666756 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.740018 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.866685 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-etc-machine-id\") pod \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.866787 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-scripts\") pod \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.866843 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-combined-ca-bundle\") pod \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.866860 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e8f1ee52-df88-4106-b7ef-ed0bb39739ba" (UID: "e8f1ee52-df88-4106-b7ef-ed0bb39739ba"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.866937 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data\") pod \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.867091 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data-custom\") pod \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.867167 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-cert-memcached-mtls\") pod \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.867675 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tsfh\" (UniqueName: \"kubernetes.io/projected/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-kube-api-access-2tsfh\") pod \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\" (UID: \"e8f1ee52-df88-4106-b7ef-ed0bb39739ba\") " Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.868286 4704 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.870724 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data-custom" (OuterVolumeSpecName: 
"config-data-custom") pod "e8f1ee52-df88-4106-b7ef-ed0bb39739ba" (UID: "e8f1ee52-df88-4106-b7ef-ed0bb39739ba"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.870778 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-scripts" (OuterVolumeSpecName: "scripts") pod "e8f1ee52-df88-4106-b7ef-ed0bb39739ba" (UID: "e8f1ee52-df88-4106-b7ef-ed0bb39739ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.871743 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-kube-api-access-2tsfh" (OuterVolumeSpecName: "kube-api-access-2tsfh") pod "e8f1ee52-df88-4106-b7ef-ed0bb39739ba" (UID: "e8f1ee52-df88-4106-b7ef-ed0bb39739ba"). InnerVolumeSpecName "kube-api-access-2tsfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.923499 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.923904 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-central-agent" containerID="cri-o://9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719" gracePeriod=30 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.925169 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="proxy-httpd" containerID="cri-o://66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f" gracePeriod=30 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.925311 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="sg-core" containerID="cri-o://94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8" gracePeriod=30 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.925361 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-notification-agent" containerID="cri-o://a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9" gracePeriod=30 Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.933489 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.197:3000/\": EOF" Jan 22 17:01:33 crc 
kubenswrapper[4704]: I0122 17:01:33.965176 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8f1ee52-df88-4106-b7ef-ed0bb39739ba" (UID: "e8f1ee52-df88-4106-b7ef-ed0bb39739ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.969545 4704 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.969582 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tsfh\" (UniqueName: \"kubernetes.io/projected/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-kube-api-access-2tsfh\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.969596 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:33 crc kubenswrapper[4704]: I0122 17:01:33.969607 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.054288 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data" (OuterVolumeSpecName: "config-data") pod "e8f1ee52-df88-4106-b7ef-ed0bb39739ba" (UID: "e8f1ee52-df88-4106-b7ef-ed0bb39739ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.071242 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.081871 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "e8f1ee52-df88-4106-b7ef-ed0bb39739ba" (UID: "e8f1ee52-df88-4106-b7ef-ed0bb39739ba"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.172970 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e8f1ee52-df88-4106-b7ef-ed0bb39739ba-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.316959 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.316960 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"e8f1ee52-df88-4106-b7ef-ed0bb39739ba","Type":"ContainerDied","Data":"cf0b12b9102d7f34c044ef238184876e909b7cd6706fb26d9f003a6a88440b77"} Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.317466 4704 scope.go:117] "RemoveContainer" containerID="c62f552295ff84b1a41c7ae83930fb9227a910074c312c6a160eeea937d976ab" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.322048 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/cinder-api-0" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.194:8776/healthcheck\": dial tcp 10.217.0.194:8776: connect: connection refused" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.327104 4704 generic.go:334] "Generic (PLEG): container finished" podID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerID="66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f" exitCode=0 Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.327141 4704 generic.go:334] "Generic (PLEG): container finished" podID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerID="94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8" exitCode=2 Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.327316 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerDied","Data":"66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f"} Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.327346 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerDied","Data":"94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8"} Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.365301 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.372056 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.468788 4704 scope.go:117] "RemoveContainer" containerID="1dcf4d7fc77235e6388c275522d2987dcfdf4a4f6b93a6509c3878fc14f726db" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.540144 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.724604 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.832611 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.885336 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5d2s\" (UniqueName: \"kubernetes.io/projected/2624279d-7077-4056-92b8-818707451a5b-kube-api-access-t5d2s\") pod \"2624279d-7077-4056-92b8-818707451a5b\" (UID: \"2624279d-7077-4056-92b8-818707451a5b\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.885444 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2624279d-7077-4056-92b8-818707451a5b-operator-scripts\") pod \"2624279d-7077-4056-92b8-818707451a5b\" (UID: \"2624279d-7077-4056-92b8-818707451a5b\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.886637 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2624279d-7077-4056-92b8-818707451a5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2624279d-7077-4056-92b8-818707451a5b" (UID: "2624279d-7077-4056-92b8-818707451a5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.890076 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2624279d-7077-4056-92b8-818707451a5b-kube-api-access-t5d2s" (OuterVolumeSpecName: "kube-api-access-t5d2s") pod "2624279d-7077-4056-92b8-818707451a5b" (UID: "2624279d-7077-4056-92b8-818707451a5b"). InnerVolumeSpecName "kube-api-access-t5d2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.987307 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hsxt\" (UniqueName: \"kubernetes.io/projected/39baf79a-d188-48e5-ba61-addf254f1257-kube-api-access-2hsxt\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.987704 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-cert-memcached-mtls\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.987742 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-combined-ca-bundle\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.987825 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-scripts\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.987896 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-internal-tls-certs\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.987943 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.987988 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-public-tls-certs\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.988039 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39baf79a-d188-48e5-ba61-addf254f1257-logs\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.988105 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data-custom\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.988128 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39baf79a-d188-48e5-ba61-addf254f1257-etc-machine-id\") pod \"39baf79a-d188-48e5-ba61-addf254f1257\" (UID: \"39baf79a-d188-48e5-ba61-addf254f1257\") " Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.988526 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5d2s\" (UniqueName: \"kubernetes.io/projected/2624279d-7077-4056-92b8-818707451a5b-kube-api-access-t5d2s\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.988549 4704 reconciler_common.go:293] "Volume detached for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2624279d-7077-4056-92b8-818707451a5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.988616 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39baf79a-d188-48e5-ba61-addf254f1257-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.989858 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39baf79a-d188-48e5-ba61-addf254f1257-logs" (OuterVolumeSpecName: "logs") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.992625 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39baf79a-d188-48e5-ba61-addf254f1257-kube-api-access-2hsxt" (OuterVolumeSpecName: "kube-api-access-2hsxt") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "kube-api-access-2hsxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.992753 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-scripts" (OuterVolumeSpecName: "scripts") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:34 crc kubenswrapper[4704]: I0122 17:01:34.992905 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.015629 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.040481 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data" (OuterVolumeSpecName: "config-data") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.045003 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.051408 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.055607 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "39baf79a-d188-48e5-ba61-addf254f1257" (UID: "39baf79a-d188-48e5-ba61-addf254f1257"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.089907 4704 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.089947 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.089956 4704 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.089965 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39baf79a-d188-48e5-ba61-addf254f1257-logs\") on node \"crc\" DevicePath \"\"" Jan 
22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.089974 4704 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39baf79a-d188-48e5-ba61-addf254f1257-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.089982 4704 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.089991 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hsxt\" (UniqueName: \"kubernetes.io/projected/39baf79a-d188-48e5-ba61-addf254f1257-kube-api-access-2hsxt\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.090003 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.090012 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.090020 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39baf79a-d188-48e5-ba61-addf254f1257-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.339314 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.339311 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder0423-account-delete-jkmhj" event={"ID":"2624279d-7077-4056-92b8-818707451a5b","Type":"ContainerDied","Data":"203cbba87889c4b97026bafbba735e002d877862c8eb6abb11adfb3b770e5221"} Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.339982 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203cbba87889c4b97026bafbba735e002d877862c8eb6abb11adfb3b770e5221" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.341213 4704 generic.go:334] "Generic (PLEG): container finished" podID="39baf79a-d188-48e5-ba61-addf254f1257" containerID="403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19" exitCode=0 Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.341261 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"39baf79a-d188-48e5-ba61-addf254f1257","Type":"ContainerDied","Data":"403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19"} Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.341284 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"39baf79a-d188-48e5-ba61-addf254f1257","Type":"ContainerDied","Data":"5eb01e8e9752ff1d9568d3bc72d829f7ddc33af8f21dbe3d80dbbeb37988a302"} Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.341306 4704 scope.go:117] "RemoveContainer" containerID="403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.343448 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.358996 4704 generic.go:334] "Generic (PLEG): container finished" podID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerID="9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719" exitCode=0 Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.359040 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerDied","Data":"9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719"} Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.448539 4704 scope.go:117] "RemoveContainer" containerID="990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.453639 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.460387 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.466123 4704 scope.go:117] "RemoveContainer" containerID="403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19" Jan 22 17:01:35 crc kubenswrapper[4704]: E0122 17:01:35.466557 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19\": container with ID starting with 403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19 not found: ID does not exist" containerID="403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.466601 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19"} err="failed to get container status \"403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19\": rpc error: code = NotFound desc = could not find container \"403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19\": container with ID starting with 403511d8eb8f0f1f5408c8c7a39495a1c6e305e2c7a125baaed9c70ff1759e19 not found: ID does not exist" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.466634 4704 scope.go:117] "RemoveContainer" containerID="990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981" Jan 22 17:01:35 crc kubenswrapper[4704]: E0122 17:01:35.466981 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981\": container with ID starting with 990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981 not found: ID does not exist" containerID="990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.467009 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981"} err="failed to get container status \"990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981\": rpc error: code = NotFound desc = could not find container \"990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981\": container with ID starting with 990b18c8bc5f7bb6cb71ec51c6b90cb9ab1b8b9677438b42a72c916c367e2981 not found: ID does not exist" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.654532 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39baf79a-d188-48e5-ba61-addf254f1257" path="/var/lib/kubelet/pods/39baf79a-d188-48e5-ba61-addf254f1257/volumes" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 
17:01:35.655287 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" path="/var/lib/kubelet/pods/41a036fd-d350-49ff-8d77-3ee76652a92f/volumes" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.655979 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" path="/var/lib/kubelet/pods/e8f1ee52-df88-4106-b7ef-ed0bb39739ba/volumes" Jan 22 17:01:35 crc kubenswrapper[4704]: I0122 17:01:35.791140 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:36 crc kubenswrapper[4704]: I0122 17:01:36.182381 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-create-l9d46"] Jan 22 17:01:36 crc kubenswrapper[4704]: I0122 17:01:36.190964 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-create-l9d46"] Jan 22 17:01:36 crc kubenswrapper[4704]: I0122 17:01:36.208485 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder0423-account-delete-jkmhj"] Jan 22 17:01:36 crc kubenswrapper[4704]: I0122 17:01:36.217923 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder0423-account-delete-jkmhj"] Jan 22 17:01:36 crc kubenswrapper[4704]: I0122 17:01:36.225899 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-0423-account-create-update-dwp5g"] Jan 22 17:01:36 crc kubenswrapper[4704]: I0122 17:01:36.240965 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-0423-account-create-update-dwp5g"] Jan 22 17:01:36 crc kubenswrapper[4704]: I0122 17:01:36.981001 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:37 crc kubenswrapper[4704]: I0122 17:01:37.658366 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2624279d-7077-4056-92b8-818707451a5b" path="/var/lib/kubelet/pods/2624279d-7077-4056-92b8-818707451a5b/volumes" Jan 22 17:01:37 crc kubenswrapper[4704]: I0122 17:01:37.659139 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a4c89f-490a-477e-a824-d415cd7e8d3b" path="/var/lib/kubelet/pods/72a4c89f-490a-477e-a824-d415cd7e8d3b/volumes" Jan 22 17:01:37 crc kubenswrapper[4704]: I0122 17:01:37.659701 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d010716c-c2ec-4f59-9c18-19b48ec26d8f" path="/var/lib/kubelet/pods/d010716c-c2ec-4f59-9c18-19b48ec26d8f/volumes" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.014554 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.106436 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.152607 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-config-data\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.152681 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-run-httpd\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.152778 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-ceilometer-tls-certs\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.152858 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-scripts\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.152911 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-sg-core-conf-yaml\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.152943 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-log-httpd\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.153030 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78qvx\" (UniqueName: \"kubernetes.io/projected/d81d18f1-fedb-4edb-9713-7ff9024ba03d-kube-api-access-78qvx\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.153057 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-combined-ca-bundle\") pod \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\" (UID: \"d81d18f1-fedb-4edb-9713-7ff9024ba03d\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.153078 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.153391 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.153386 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.159250 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-scripts" (OuterVolumeSpecName: "scripts") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.159326 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81d18f1-fedb-4edb-9713-7ff9024ba03d-kube-api-access-78qvx" (OuterVolumeSpecName: "kube-api-access-78qvx") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). InnerVolumeSpecName "kube-api-access-78qvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.183092 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_cec7940e-ed78-428b-ac74-1b515c5e0b71/watcher-decision-engine/0.log" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.185117 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.213635 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.215102 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.240886 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-config-data" (OuterVolumeSpecName: "config-data") pod "d81d18f1-fedb-4edb-9713-7ff9024ba03d" (UID: "d81d18f1-fedb-4edb-9713-7ff9024ba03d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.254450 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-config-data\") pod \"cec7940e-ed78-428b-ac74-1b515c5e0b71\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.254626 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-custom-prometheus-ca\") pod \"cec7940e-ed78-428b-ac74-1b515c5e0b71\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.254667 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-combined-ca-bundle\") pod \"cec7940e-ed78-428b-ac74-1b515c5e0b71\" (UID: 
\"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.254698 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7940e-ed78-428b-ac74-1b515c5e0b71-logs\") pod \"cec7940e-ed78-428b-ac74-1b515c5e0b71\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.254725 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-cert-memcached-mtls\") pod \"cec7940e-ed78-428b-ac74-1b515c5e0b71\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.254751 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jpsv\" (UniqueName: \"kubernetes.io/projected/cec7940e-ed78-428b-ac74-1b515c5e0b71-kube-api-access-7jpsv\") pod \"cec7940e-ed78-428b-ac74-1b515c5e0b71\" (UID: \"cec7940e-ed78-428b-ac74-1b515c5e0b71\") " Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255050 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255065 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255074 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255082 4704 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d81d18f1-fedb-4edb-9713-7ff9024ba03d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255090 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78qvx\" (UniqueName: \"kubernetes.io/projected/d81d18f1-fedb-4edb-9713-7ff9024ba03d-kube-api-access-78qvx\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255100 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255110 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d81d18f1-fedb-4edb-9713-7ff9024ba03d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.255620 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cec7940e-ed78-428b-ac74-1b515c5e0b71-logs" (OuterVolumeSpecName: "logs") pod "cec7940e-ed78-428b-ac74-1b515c5e0b71" (UID: "cec7940e-ed78-428b-ac74-1b515c5e0b71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.258399 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec7940e-ed78-428b-ac74-1b515c5e0b71-kube-api-access-7jpsv" (OuterVolumeSpecName: "kube-api-access-7jpsv") pod "cec7940e-ed78-428b-ac74-1b515c5e0b71" (UID: "cec7940e-ed78-428b-ac74-1b515c5e0b71"). InnerVolumeSpecName "kube-api-access-7jpsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.283253 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cec7940e-ed78-428b-ac74-1b515c5e0b71" (UID: "cec7940e-ed78-428b-ac74-1b515c5e0b71"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.291883 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cec7940e-ed78-428b-ac74-1b515c5e0b71" (UID: "cec7940e-ed78-428b-ac74-1b515c5e0b71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.297902 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-config-data" (OuterVolumeSpecName: "config-data") pod "cec7940e-ed78-428b-ac74-1b515c5e0b71" (UID: "cec7940e-ed78-428b-ac74-1b515c5e0b71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.329224 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "cec7940e-ed78-428b-ac74-1b515c5e0b71" (UID: "cec7940e-ed78-428b-ac74-1b515c5e0b71"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.356120 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.356158 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.356170 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7940e-ed78-428b-ac74-1b515c5e0b71-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.356179 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.356187 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jpsv\" (UniqueName: \"kubernetes.io/projected/cec7940e-ed78-428b-ac74-1b515c5e0b71-kube-api-access-7jpsv\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.356197 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7940e-ed78-428b-ac74-1b515c5e0b71-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.402518 4704 generic.go:334] "Generic (PLEG): container finished" podID="cec7940e-ed78-428b-ac74-1b515c5e0b71" containerID="2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b" exitCode=0 Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.402574 4704 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cec7940e-ed78-428b-ac74-1b515c5e0b71","Type":"ContainerDied","Data":"2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b"} Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.402598 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cec7940e-ed78-428b-ac74-1b515c5e0b71","Type":"ContainerDied","Data":"7a0f66c8517323c65520dd4ac530e58cd6211c59ecdd70f0afc3691514459798"} Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.402615 4704 scope.go:117] "RemoveContainer" containerID="2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.402757 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.406621 4704 generic.go:334] "Generic (PLEG): container finished" podID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerID="a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9" exitCode=0 Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.406656 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerDied","Data":"a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9"} Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.406676 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d81d18f1-fedb-4edb-9713-7ff9024ba03d","Type":"ContainerDied","Data":"2deae7d53ad085fc521d4a05865ced553d889b8c7069589b4174dc4989154ec9"} Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.406732 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.432357 4704 scope.go:117] "RemoveContainer" containerID="2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.433073 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b\": container with ID starting with 2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b not found: ID does not exist" containerID="2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.433110 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b"} err="failed to get container status \"2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b\": rpc error: code = NotFound desc = could not find container \"2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b\": container with ID starting with 2ce4c9d4120d789d2bafd3ef0a8cd7d4e2fa1e8165b55f9e1ff0f0552cb0607b not found: ID does not exist" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.433130 4704 scope.go:117] "RemoveContainer" containerID="66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.436752 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.444554 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.453882 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] 
Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.457854 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458296 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec7940e-ed78-428b-ac74-1b515c5e0b71" containerName="watcher-decision-engine" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458310 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec7940e-ed78-428b-ac74-1b515c5e0b71" containerName="watcher-decision-engine" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458326 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-central-agent" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458334 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-central-agent" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458351 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2624279d-7077-4056-92b8-818707451a5b" containerName="mariadb-account-delete" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458358 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2624279d-7077-4056-92b8-818707451a5b" containerName="mariadb-account-delete" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458370 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerName="cinder-backup" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458377 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerName="cinder-backup" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458389 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" 
containerName="sg-core" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458396 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="sg-core" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458411 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-notification-agent" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458418 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-notification-agent" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458432 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="proxy-httpd" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458438 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="proxy-httpd" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458451 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458458 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458470 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="cinder-scheduler" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458478 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="cinder-scheduler" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458490 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" 
containerName="probe" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458498 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerName="probe" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458518 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="probe" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458526 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="probe" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.458544 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api-log" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458551 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api-log" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458730 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerName="probe" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458742 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api-log" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458757 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="39baf79a-d188-48e5-ba61-addf254f1257" containerName="cinder-api" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458769 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2624279d-7077-4056-92b8-818707451a5b" containerName="mariadb-account-delete" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458812 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-notification-agent" Jan 
22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458820 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="probe" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458833 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="proxy-httpd" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458842 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="sg-core" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458857 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec7940e-ed78-428b-ac74-1b515c5e0b71" containerName="watcher-decision-engine" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458868 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f1ee52-df88-4106-b7ef-ed0bb39739ba" containerName="cinder-scheduler" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458880 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" containerName="ceilometer-central-agent" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.458891 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a036fd-d350-49ff-8d77-3ee76652a92f" containerName="cinder-backup" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.459553 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.460915 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.460969 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.460989 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.461020 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af5e06c7-250e-4c54-9adf-216fd10913ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.461046 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.461063 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmcdw\" (UniqueName: \"kubernetes.io/projected/af5e06c7-250e-4c54-9adf-216fd10913ca-kube-api-access-qmcdw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.462212 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.465001 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.474137 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.477269 4704 scope.go:117] "RemoveContainer" containerID="94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.484726 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.487467 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.491437 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.491636 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.491851 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.518424 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.527433 4704 scope.go:117] "RemoveContainer" containerID="a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.544208 4704 scope.go:117] "RemoveContainer" containerID="9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.559898 4704 scope.go:117] "RemoveContainer" containerID="66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.560256 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f\": container with ID starting with 66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f not found: ID does not exist" containerID="66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.560295 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f"} 
err="failed to get container status \"66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f\": rpc error: code = NotFound desc = could not find container \"66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f\": container with ID starting with 66aa05423b4bf671ec1e2114c27b3ed60ac120186cec1c5d8ab296aa75696d1f not found: ID does not exist" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.560320 4704 scope.go:117] "RemoveContainer" containerID="94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.560579 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8\": container with ID starting with 94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8 not found: ID does not exist" containerID="94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.560596 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8"} err="failed to get container status \"94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8\": rpc error: code = NotFound desc = could not find container \"94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8\": container with ID starting with 94795e8defb9f32c6127daf4d4a2b31e8481c840e7e78e40ff06b5f4fa758db8 not found: ID does not exist" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.560610 4704 scope.go:117] "RemoveContainer" containerID="a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.561024 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9\": container with ID starting with a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9 not found: ID does not exist" containerID="a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.561051 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9"} err="failed to get container status \"a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9\": rpc error: code = NotFound desc = could not find container \"a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9\": container with ID starting with a681984a1f7ab69738738df436d01b8b18a725ae9f3565100b495c505bd889a9 not found: ID does not exist" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.561067 4704 scope.go:117] "RemoveContainer" containerID="9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719" Jan 22 17:01:38 crc kubenswrapper[4704]: E0122 17:01:38.561505 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719\": container with ID starting with 9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719 not found: ID does not exist" containerID="9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.561529 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719"} err="failed to get container status \"9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719\": rpc error: code = NotFound desc = could not find container \"9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719\": container with ID 
starting with 9f536b447129677ab986b414d63c8b694beb43964e9747f6f31b6cfc766f1719 not found: ID does not exist" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562021 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-log-httpd\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562052 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562082 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562310 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562371 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562402 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562428 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-scripts\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562447 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wpz\" (UniqueName: \"kubernetes.io/projected/8bcf7c3d-641c-4fb3-938b-3e840708623b-kube-api-access-n9wpz\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562466 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-run-httpd\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562489 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-config-data\") pod \"ceilometer-0\" (UID: 
\"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562514 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af5e06c7-250e-4c54-9adf-216fd10913ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562554 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562581 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmcdw\" (UniqueName: \"kubernetes.io/projected/af5e06c7-250e-4c54-9adf-216fd10913ca-kube-api-access-qmcdw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.562624 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.563102 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af5e06c7-250e-4c54-9adf-216fd10913ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.566599 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.566700 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.566735 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.567221 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.578029 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmcdw\" (UniqueName: \"kubernetes.io/projected/af5e06c7-250e-4c54-9adf-216fd10913ca-kube-api-access-qmcdw\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.663737 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.663819 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-log-httpd\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.663852 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.663874 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.664433 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-log-httpd\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc 
kubenswrapper[4704]: I0122 17:01:38.665319 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-scripts\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.665370 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wpz\" (UniqueName: \"kubernetes.io/projected/8bcf7c3d-641c-4fb3-938b-3e840708623b-kube-api-access-n9wpz\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.665396 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-run-httpd\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.665432 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-config-data\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.665900 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-run-httpd\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.668670 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.669192 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-scripts\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.669298 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.675563 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-config-data\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.676568 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.690557 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wpz\" (UniqueName: \"kubernetes.io/projected/8bcf7c3d-641c-4fb3-938b-3e840708623b-kube-api-access-n9wpz\") pod \"ceilometer-0\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " pod="watcher-kuttl-default/ceilometer-0" 
Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.785444 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:38 crc kubenswrapper[4704]: I0122 17:01:38.814027 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:39 crc kubenswrapper[4704]: I0122 17:01:39.266006 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:39 crc kubenswrapper[4704]: W0122 17:01:39.329012 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf5e06c7_250e_4c54_9adf_216fd10913ca.slice/crio-22a1385e59c4fe5e4bbd57128fa65386aa054bef98f46a3192d3e3873b6295a9 WatchSource:0}: Error finding container 22a1385e59c4fe5e4bbd57128fa65386aa054bef98f46a3192d3e3873b6295a9: Status 404 returned error can't find the container with id 22a1385e59c4fe5e4bbd57128fa65386aa054bef98f46a3192d3e3873b6295a9 Jan 22 17:01:39 crc kubenswrapper[4704]: I0122 17:01:39.333982 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:39 crc kubenswrapper[4704]: I0122 17:01:39.415579 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"af5e06c7-250e-4c54-9adf-216fd10913ca","Type":"ContainerStarted","Data":"22a1385e59c4fe5e4bbd57128fa65386aa054bef98f46a3192d3e3873b6295a9"} Jan 22 17:01:39 crc kubenswrapper[4704]: I0122 17:01:39.416374 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerStarted","Data":"6c9536d5eb5bbea46015662f6320abe0bf3b0a9619739e8a3379649ce7efd7c7"} Jan 22 17:01:39 crc kubenswrapper[4704]: I0122 17:01:39.645664 4704 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="cec7940e-ed78-428b-ac74-1b515c5e0b71" path="/var/lib/kubelet/pods/cec7940e-ed78-428b-ac74-1b515c5e0b71/volumes" Jan 22 17:01:39 crc kubenswrapper[4704]: I0122 17:01:39.646875 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81d18f1-fedb-4edb-9713-7ff9024ba03d" path="/var/lib/kubelet/pods/d81d18f1-fedb-4edb-9713-7ff9024ba03d/volumes" Jan 22 17:01:40 crc kubenswrapper[4704]: I0122 17:01:40.424880 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerStarted","Data":"34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f"} Jan 22 17:01:40 crc kubenswrapper[4704]: I0122 17:01:40.427002 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"af5e06c7-250e-4c54-9adf-216fd10913ca","Type":"ContainerStarted","Data":"f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888"} Jan 22 17:01:40 crc kubenswrapper[4704]: I0122 17:01:40.453043 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.453019567 podStartE2EDuration="2.453019567s" podCreationTimestamp="2026-01-22 17:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:40.447272714 +0000 UTC m=+1993.091819454" watchObservedRunningTime="2026-01-22 17:01:40.453019567 +0000 UTC m=+1993.097566267" Jan 22 17:01:40 crc kubenswrapper[4704]: I0122 17:01:40.487528 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:41 crc kubenswrapper[4704]: I0122 17:01:41.436489 4704 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerStarted","Data":"2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda"} Jan 22 17:01:41 crc kubenswrapper[4704]: I0122 17:01:41.437156 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerStarted","Data":"f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c"} Jan 22 17:01:41 crc kubenswrapper[4704]: I0122 17:01:41.659217 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:42 crc kubenswrapper[4704]: I0122 17:01:42.869440 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:43 crc kubenswrapper[4704]: I0122 17:01:43.454677 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerStarted","Data":"f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6"} Jan 22 17:01:43 crc kubenswrapper[4704]: I0122 17:01:43.455063 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:43 crc kubenswrapper[4704]: I0122 17:01:43.480054 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.232074136 podStartE2EDuration="5.480037782s" podCreationTimestamp="2026-01-22 17:01:38 +0000 UTC" firstStartedPulling="2026-01-22 17:01:39.273940987 +0000 UTC m=+1991.918487687" lastFinishedPulling="2026-01-22 17:01:42.521904643 +0000 UTC m=+1995.166451333" observedRunningTime="2026-01-22 17:01:43.474016391 
+0000 UTC m=+1996.118563101" watchObservedRunningTime="2026-01-22 17:01:43.480037782 +0000 UTC m=+1996.124584472" Jan 22 17:01:44 crc kubenswrapper[4704]: I0122 17:01:44.074642 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:45 crc kubenswrapper[4704]: I0122 17:01:45.241924 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:46 crc kubenswrapper[4704]: I0122 17:01:46.469487 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:47 crc kubenswrapper[4704]: I0122 17:01:47.706418 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:48 crc kubenswrapper[4704]: I0122 17:01:48.786788 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:48 crc kubenswrapper[4704]: I0122 17:01:48.814696 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:48 crc kubenswrapper[4704]: I0122 17:01:48.953701 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.086430 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.086503 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.086554 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.087242 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbfd2dfdd7d5192b0d486e087debbb041d258bd9f348744c87a1d512ab989a16"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.087323 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://fbfd2dfdd7d5192b0d486e087debbb041d258bd9f348744c87a1d512ab989a16" gracePeriod=600 Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.501983 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerID="fbfd2dfdd7d5192b0d486e087debbb041d258bd9f348744c87a1d512ab989a16" exitCode=0 Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.502018 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" 
event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"fbfd2dfdd7d5192b0d486e087debbb041d258bd9f348744c87a1d512ab989a16"} Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.502634 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd"} Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.502695 4704 scope.go:117] "RemoveContainer" containerID="3f4a52a78b4a181442a70ee6ccd06035e4db661ff704fa3afeb5315fe9384435" Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.503374 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:49 crc kubenswrapper[4704]: I0122 17:01:49.527614 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.131551 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_af5e06c7-250e-4c54-9adf-216fd10913ca/watcher-decision-engine/0.log" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.239480 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"] Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.247280 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-qm2rg"] Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.313367 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher04da-account-delete-d99sx"] Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.314624 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.323691 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.332990 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher04da-account-delete-d99sx"] Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.375098 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1664925-da93-46ad-bbf2-8dd63718c453-operator-scripts\") pod \"watcher04da-account-delete-d99sx\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.375177 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfvd\" (UniqueName: \"kubernetes.io/projected/f1664925-da93-46ad-bbf2-8dd63718c453-kube-api-access-cjfvd\") pod \"watcher04da-account-delete-d99sx\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.384380 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.384668 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" containerName="watcher-applier" containerID="cri-o://cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076" gracePeriod=30 Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.394692 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.394959 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-kuttl-api-log" containerID="cri-o://710d67066b59525bf4a66854465e07cdc014f82c78a4ebe4b6a984b070cc168f" gracePeriod=30 Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.395088 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-api" containerID="cri-o://6fd9f84337d32aaec0b2259446873daf6a2a6b9ad3e832040170b6b25c3a23dd" gracePeriod=30 Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.476479 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1664925-da93-46ad-bbf2-8dd63718c453-operator-scripts\") pod \"watcher04da-account-delete-d99sx\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.476553 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfvd\" (UniqueName: \"kubernetes.io/projected/f1664925-da93-46ad-bbf2-8dd63718c453-kube-api-access-cjfvd\") pod \"watcher04da-account-delete-d99sx\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.477347 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1664925-da93-46ad-bbf2-8dd63718c453-operator-scripts\") pod \"watcher04da-account-delete-d99sx\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " 
pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.501842 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfvd\" (UniqueName: \"kubernetes.io/projected/f1664925-da93-46ad-bbf2-8dd63718c453-kube-api-access-cjfvd\") pod \"watcher04da-account-delete-d99sx\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.530782 4704 generic.go:334] "Generic (PLEG): container finished" podID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerID="710d67066b59525bf4a66854465e07cdc014f82c78a4ebe4b6a984b070cc168f" exitCode=143 Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.531264 4704 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-fkbrv\" not found" Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.531660 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6b594a1-4164-40a3-8814-0fb00c3fb8b2","Type":"ContainerDied","Data":"710d67066b59525bf4a66854465e07cdc014f82c78a4ebe4b6a984b070cc168f"} Jan 22 17:01:50 crc kubenswrapper[4704]: E0122 17:01:50.578447 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:50 crc kubenswrapper[4704]: E0122 17:01:50.578531 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data podName:af5e06c7-250e-4c54-9adf-216fd10913ca nodeName:}" failed. No retries permitted until 2026-01-22 17:01:51.078511055 +0000 UTC m=+2003.723057755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:50 crc kubenswrapper[4704]: I0122 17:01:50.638205 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:51 crc kubenswrapper[4704]: E0122 17:01:51.084861 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:51 crc kubenswrapper[4704]: E0122 17:01:51.085409 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data podName:af5e06c7-250e-4c54-9adf-216fd10913ca nodeName:}" failed. No retries permitted until 2026-01-22 17:01:52.085390848 +0000 UTC m=+2004.729937548 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.298049 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher04da-account-delete-d99sx"] Jan 22 17:01:51 crc kubenswrapper[4704]: W0122 17:01:51.341902 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1664925_da93_46ad_bbf2_8dd63718c453.slice/crio-80cb207e4123ccbc389a029559e6dc07db988ec62e558d8cc0ab66364593dea5 WatchSource:0}: Error finding container 80cb207e4123ccbc389a029559e6dc07db988ec62e558d8cc0ab66364593dea5: Status 404 returned error can't find the container with id 80cb207e4123ccbc389a029559e6dc07db988ec62e558d8cc0ab66364593dea5 Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.550137 4704 generic.go:334] "Generic (PLEG): container finished" podID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerID="6fd9f84337d32aaec0b2259446873daf6a2a6b9ad3e832040170b6b25c3a23dd" exitCode=0 Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.550532 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6b594a1-4164-40a3-8814-0fb00c3fb8b2","Type":"ContainerDied","Data":"6fd9f84337d32aaec0b2259446873daf6a2a6b9ad3e832040170b6b25c3a23dd"} Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.550594 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6b594a1-4164-40a3-8814-0fb00c3fb8b2","Type":"ContainerDied","Data":"5d324ee52d71cbb139afccf9dbfffe2d888828ebd2894b11617e64e7abc50912"} Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.550612 4704 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="5d324ee52d71cbb139afccf9dbfffe2d888828ebd2894b11617e64e7abc50912" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.556626 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" event={"ID":"f1664925-da93-46ad-bbf2-8dd63718c453","Type":"ContainerStarted","Data":"80cb207e4123ccbc389a029559e6dc07db988ec62e558d8cc0ab66364593dea5"} Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.556686 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="af5e06c7-250e-4c54-9adf-216fd10913ca" containerName="watcher-decision-engine" containerID="cri-o://f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888" gracePeriod=30 Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.643271 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26bb0aee-8347-4b52-b19d-ef0cd4d1a29e" path="/var/lib/kubelet/pods/26bb0aee-8347-4b52-b19d-ef0cd4d1a29e/volumes" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.724723 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.749196 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" podStartSLOduration=1.749177377 podStartE2EDuration="1.749177377s" podCreationTimestamp="2026-01-22 17:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:51.580192229 +0000 UTC m=+2004.224738929" watchObservedRunningTime="2026-01-22 17:01:51.749177377 +0000 UTC m=+2004.393724077" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.815886 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wfxj\" (UniqueName: \"kubernetes.io/projected/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-kube-api-access-8wfxj\") pod \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.815955 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-combined-ca-bundle\") pod \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.815992 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-custom-prometheus-ca\") pod \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.816119 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-logs\") pod 
\"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.816160 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-config-data\") pod \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.816249 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-cert-memcached-mtls\") pod \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\" (UID: \"f6b594a1-4164-40a3-8814-0fb00c3fb8b2\") " Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.824144 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-logs" (OuterVolumeSpecName: "logs") pod "f6b594a1-4164-40a3-8814-0fb00c3fb8b2" (UID: "f6b594a1-4164-40a3-8814-0fb00c3fb8b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.824442 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-kube-api-access-8wfxj" (OuterVolumeSpecName: "kube-api-access-8wfxj") pod "f6b594a1-4164-40a3-8814-0fb00c3fb8b2" (UID: "f6b594a1-4164-40a3-8814-0fb00c3fb8b2"). InnerVolumeSpecName "kube-api-access-8wfxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.851209 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f6b594a1-4164-40a3-8814-0fb00c3fb8b2" (UID: "f6b594a1-4164-40a3-8814-0fb00c3fb8b2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.853041 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6b594a1-4164-40a3-8814-0fb00c3fb8b2" (UID: "f6b594a1-4164-40a3-8814-0fb00c3fb8b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.898944 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-config-data" (OuterVolumeSpecName: "config-data") pod "f6b594a1-4164-40a3-8814-0fb00c3fb8b2" (UID: "f6b594a1-4164-40a3-8814-0fb00c3fb8b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.902263 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f6b594a1-4164-40a3-8814-0fb00c3fb8b2" (UID: "f6b594a1-4164-40a3-8814-0fb00c3fb8b2"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.917890 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.917928 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.917938 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.917948 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wfxj\" (UniqueName: \"kubernetes.io/projected/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-kube-api-access-8wfxj\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.917956 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.917965 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6b594a1-4164-40a3-8814-0fb00c3fb8b2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:51 crc kubenswrapper[4704]: I0122 17:01:51.955202 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.018808 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-combined-ca-bundle\") pod \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.018926 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5t8h\" (UniqueName: \"kubernetes.io/projected/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-kube-api-access-q5t8h\") pod \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.018982 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-cert-memcached-mtls\") pod \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.019009 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-logs\") pod \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.019031 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-config-data\") pod \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\" (UID: \"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95\") " Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.020063 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-logs" (OuterVolumeSpecName: "logs") pod "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" (UID: "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.024653 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-kube-api-access-q5t8h" (OuterVolumeSpecName: "kube-api-access-q5t8h") pod "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" (UID: "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95"). InnerVolumeSpecName "kube-api-access-q5t8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.046926 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" (UID: "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.073653 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-config-data" (OuterVolumeSpecName: "config-data") pod "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" (UID: "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.089339 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" (UID: "1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:52 crc kubenswrapper[4704]: E0122 17:01:52.121069 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:52 crc kubenswrapper[4704]: E0122 17:01:52.121163 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data podName:af5e06c7-250e-4c54-9adf-216fd10913ca nodeName:}" failed. No retries permitted until 2026-01-22 17:01:54.121141038 +0000 UTC m=+2006.765687798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.121071 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5t8h\" (UniqueName: \"kubernetes.io/projected/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-kube-api-access-q5t8h\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.121219 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.121230 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.121240 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.121248 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.564765 4704 generic.go:334] "Generic (PLEG): container finished" podID="f1664925-da93-46ad-bbf2-8dd63718c453" containerID="a2b90acfa945d7d70b151b80471df18e8b38d0be29a969ff3ec775c738f7bfc0" exitCode=0 Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.564859 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" event={"ID":"f1664925-da93-46ad-bbf2-8dd63718c453","Type":"ContainerDied","Data":"a2b90acfa945d7d70b151b80471df18e8b38d0be29a969ff3ec775c738f7bfc0"} Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.567455 4704 generic.go:334] "Generic (PLEG): container finished" podID="1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" containerID="cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076" exitCode=0 Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.567497 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95","Type":"ContainerDied","Data":"cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076"} Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.567530 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.567535 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.567558 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95","Type":"ContainerDied","Data":"8b90336c7b282fc4263faeca4b68b444442abebde58434f8ec784055897cb0b8"} Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.567571 4704 scope.go:117] "RemoveContainer" containerID="cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.598275 4704 scope.go:117] "RemoveContainer" containerID="cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076" Jan 22 17:01:52 crc kubenswrapper[4704]: E0122 17:01:52.598780 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076\": container with ID starting with cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076 not found: ID does not exist" containerID="cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.598952 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076"} err="failed to get container status \"cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076\": rpc error: code = NotFound desc = could not find container \"cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076\": container with ID starting with cea2fd6d47417afbe353e5bea5e6baaeea8a3db730453ee9a66577c6cb835076 not found: ID does not exist" Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.616522 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:01:52 crc 
kubenswrapper[4704]: I0122 17:01:52.623984 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.630943 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.638523 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.884487 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.885137 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-central-agent" containerID="cri-o://34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f" gracePeriod=30 Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.885182 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="proxy-httpd" containerID="cri-o://f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6" gracePeriod=30 Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.885199 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="sg-core" containerID="cri-o://2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda" gracePeriod=30 Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.885219 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-notification-agent" 
containerID="cri-o://f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c" gracePeriod=30 Jan 22 17:01:52 crc kubenswrapper[4704]: I0122 17:01:52.905085 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": EOF" Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.583289 4704 generic.go:334] "Generic (PLEG): container finished" podID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerID="f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6" exitCode=0 Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.583624 4704 generic.go:334] "Generic (PLEG): container finished" podID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerID="2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda" exitCode=2 Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.583638 4704 generic.go:334] "Generic (PLEG): container finished" podID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerID="34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f" exitCode=0 Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.583464 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerDied","Data":"f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6"} Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.583854 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerDied","Data":"2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda"} Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.583872 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerDied","Data":"34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f"} Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.646831 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" path="/var/lib/kubelet/pods/1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95/volumes" Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.647421 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" path="/var/lib/kubelet/pods/f6b594a1-4164-40a3-8814-0fb00c3fb8b2/volumes" Jan 22 17:01:53 crc kubenswrapper[4704]: I0122 17:01:53.978393 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.054436 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1664925-da93-46ad-bbf2-8dd63718c453-operator-scripts\") pod \"f1664925-da93-46ad-bbf2-8dd63718c453\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.054680 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfvd\" (UniqueName: \"kubernetes.io/projected/f1664925-da93-46ad-bbf2-8dd63718c453-kube-api-access-cjfvd\") pod \"f1664925-da93-46ad-bbf2-8dd63718c453\" (UID: \"f1664925-da93-46ad-bbf2-8dd63718c453\") " Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.055636 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1664925-da93-46ad-bbf2-8dd63718c453-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1664925-da93-46ad-bbf2-8dd63718c453" (UID: "f1664925-da93-46ad-bbf2-8dd63718c453"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.062988 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1664925-da93-46ad-bbf2-8dd63718c453-kube-api-access-cjfvd" (OuterVolumeSpecName: "kube-api-access-cjfvd") pod "f1664925-da93-46ad-bbf2-8dd63718c453" (UID: "f1664925-da93-46ad-bbf2-8dd63718c453"). InnerVolumeSpecName "kube-api-access-cjfvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.155986 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfvd\" (UniqueName: \"kubernetes.io/projected/f1664925-da93-46ad-bbf2-8dd63718c453-kube-api-access-cjfvd\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.156018 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1664925-da93-46ad-bbf2-8dd63718c453-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:54 crc kubenswrapper[4704]: E0122 17:01:54.156145 4704 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:54 crc kubenswrapper[4704]: E0122 17:01:54.156278 4704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data podName:af5e06c7-250e-4c54-9adf-216fd10913ca nodeName:}" failed. No retries permitted until 2026-01-22 17:01:58.156245105 +0000 UTC m=+2010.800791845 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.592549 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" event={"ID":"f1664925-da93-46ad-bbf2-8dd63718c453","Type":"ContainerDied","Data":"80cb207e4123ccbc389a029559e6dc07db988ec62e558d8cc0ab66364593dea5"} Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.592588 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher04da-account-delete-d99sx" Jan 22 17:01:54 crc kubenswrapper[4704]: I0122 17:01:54.592596 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80cb207e4123ccbc389a029559e6dc07db988ec62e558d8cc0ab66364593dea5" Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.345687 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zzt66"] Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.354969 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zzt66"] Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.362735 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-04da-account-create-update-f22kc"] Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.370358 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-04da-account-create-update-f22kc"] Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.378266 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher04da-account-delete-d99sx"] Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 
17:01:55.384417 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher04da-account-delete-d99sx"] Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.653130 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f84d47-64ad-4221-99ef-6a439e6bd75b" path="/var/lib/kubelet/pods/53f84d47-64ad-4221-99ef-6a439e6bd75b/volumes" Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.654010 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd66d1c3-d3c5-43ce-b451-5d57d24df04b" path="/var/lib/kubelet/pods/bd66d1c3-d3c5-43ce-b451-5d57d24df04b/volumes" Jan 22 17:01:55 crc kubenswrapper[4704]: I0122 17:01:55.654736 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1664925-da93-46ad-bbf2-8dd63718c453" path="/var/lib/kubelet/pods/f1664925-da93-46ad-bbf2-8dd63718c453/volumes" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.481713 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.491508 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.507433 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-custom-prometheus-ca\") pod \"af5e06c7-250e-4c54-9adf-216fd10913ca\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.507592 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-cert-memcached-mtls\") pod \"af5e06c7-250e-4c54-9adf-216fd10913ca\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.507639 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af5e06c7-250e-4c54-9adf-216fd10913ca-logs\") pod \"af5e06c7-250e-4c54-9adf-216fd10913ca\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.507708 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmcdw\" (UniqueName: \"kubernetes.io/projected/af5e06c7-250e-4c54-9adf-216fd10913ca-kube-api-access-qmcdw\") pod \"af5e06c7-250e-4c54-9adf-216fd10913ca\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.507744 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-combined-ca-bundle\") pod \"af5e06c7-250e-4c54-9adf-216fd10913ca\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.507802 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data\") pod \"af5e06c7-250e-4c54-9adf-216fd10913ca\" (UID: \"af5e06c7-250e-4c54-9adf-216fd10913ca\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.508817 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5e06c7-250e-4c54-9adf-216fd10913ca-logs" (OuterVolumeSpecName: "logs") pod "af5e06c7-250e-4c54-9adf-216fd10913ca" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.516210 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5e06c7-250e-4c54-9adf-216fd10913ca-kube-api-access-qmcdw" (OuterVolumeSpecName: "kube-api-access-qmcdw") pod "af5e06c7-250e-4c54-9adf-216fd10913ca" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca"). InnerVolumeSpecName "kube-api-access-qmcdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.566404 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data" (OuterVolumeSpecName: "config-data") pod "af5e06c7-250e-4c54-9adf-216fd10913ca" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.572160 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "af5e06c7-250e-4c54-9adf-216fd10913ca" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.575318 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af5e06c7-250e-4c54-9adf-216fd10913ca" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609259 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-config-data\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609302 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-ceilometer-tls-certs\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609343 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-scripts\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609397 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-combined-ca-bundle\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609437 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-n9wpz\" (UniqueName: \"kubernetes.io/projected/8bcf7c3d-641c-4fb3-938b-3e840708623b-kube-api-access-n9wpz\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609487 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-log-httpd\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609566 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-sg-core-conf-yaml\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609613 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-run-httpd\") pod \"8bcf7c3d-641c-4fb3-938b-3e840708623b\" (UID: \"8bcf7c3d-641c-4fb3-938b-3e840708623b\") " Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609914 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609926 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af5e06c7-250e-4c54-9adf-216fd10913ca-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609935 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmcdw\" 
(UniqueName: \"kubernetes.io/projected/af5e06c7-250e-4c54-9adf-216fd10913ca-kube-api-access-qmcdw\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609946 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.609955 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.610277 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.610579 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.613158 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bcf7c3d-641c-4fb3-938b-3e840708623b-kube-api-access-n9wpz" (OuterVolumeSpecName: "kube-api-access-n9wpz") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "kube-api-access-n9wpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.613453 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-scripts" (OuterVolumeSpecName: "scripts") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.616006 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "af5e06c7-250e-4c54-9adf-216fd10913ca" (UID: "af5e06c7-250e-4c54-9adf-216fd10913ca"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.625498 4704 generic.go:334] "Generic (PLEG): container finished" podID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerID="f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c" exitCode=0 Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.625574 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.625576 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerDied","Data":"f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c"} Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.625700 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8bcf7c3d-641c-4fb3-938b-3e840708623b","Type":"ContainerDied","Data":"6c9536d5eb5bbea46015662f6320abe0bf3b0a9619739e8a3379649ce7efd7c7"} Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.625750 4704 scope.go:117] "RemoveContainer" containerID="f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.627250 4704 generic.go:334] "Generic (PLEG): container finished" podID="af5e06c7-250e-4c54-9adf-216fd10913ca" containerID="f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888" exitCode=0 Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.627291 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"af5e06c7-250e-4c54-9adf-216fd10913ca","Type":"ContainerDied","Data":"f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888"} Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.627309 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.627323 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"af5e06c7-250e-4c54-9adf-216fd10913ca","Type":"ContainerDied","Data":"22a1385e59c4fe5e4bbd57128fa65386aa054bef98f46a3192d3e3873b6295a9"} Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.636947 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.649192 4704 scope.go:117] "RemoveContainer" containerID="2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.665049 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.670824 4704 scope.go:117] "RemoveContainer" containerID="f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.671017 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.671242 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.681558 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.684818 4704 scope.go:117] "RemoveContainer" containerID="34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711380 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711416 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711429 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711441 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af5e06c7-250e-4c54-9adf-216fd10913ca-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711452 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711463 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9wpz\" (UniqueName: \"kubernetes.io/projected/8bcf7c3d-641c-4fb3-938b-3e840708623b-kube-api-access-n9wpz\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711473 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bcf7c3d-641c-4fb3-938b-3e840708623b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.711483 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.713240 4704 scope.go:117] "RemoveContainer" containerID="f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.713667 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6\": container with ID starting with f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6 not found: ID does not exist" containerID="f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.713698 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6"} err="failed to get container status \"f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6\": rpc error: code = NotFound desc = could not find container \"f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6\": container with ID starting with 
f8449cf570dc6aa4885e84f96452d989aaf1c5844ee0ff36ee0519545bab88e6 not found: ID does not exist" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.713718 4704 scope.go:117] "RemoveContainer" containerID="2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.714102 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda\": container with ID starting with 2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda not found: ID does not exist" containerID="2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.714126 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda"} err="failed to get container status \"2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda\": rpc error: code = NotFound desc = could not find container \"2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda\": container with ID starting with 2e0593a9161b2a75b03c010af498abe2bb8fdca8ed20ec1daede8b3ff03edcda not found: ID does not exist" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.714138 4704 scope.go:117] "RemoveContainer" containerID="f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.714313 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c\": container with ID starting with f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c not found: ID does not exist" containerID="f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c" Jan 22 17:01:57 crc 
kubenswrapper[4704]: I0122 17:01:57.714335 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c"} err="failed to get container status \"f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c\": rpc error: code = NotFound desc = could not find container \"f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c\": container with ID starting with f5fb569fdeb542da8668e2c4838bead629b8a36a60c087b8b275df6853f94d6c not found: ID does not exist" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.714349 4704 scope.go:117] "RemoveContainer" containerID="34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.714551 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f\": container with ID starting with 34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f not found: ID does not exist" containerID="34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.714567 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f"} err="failed to get container status \"34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f\": rpc error: code = NotFound desc = could not find container \"34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f\": container with ID starting with 34738fe7f8ec4dd1a82c19cf916d44b9ae76653b6d366aa086e5780ea57b371f not found: ID does not exist" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.714579 4704 scope.go:117] "RemoveContainer" containerID="f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888" Jan 22 
17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.723056 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-config-data" (OuterVolumeSpecName: "config-data") pod "8bcf7c3d-641c-4fb3-938b-3e840708623b" (UID: "8bcf7c3d-641c-4fb3-938b-3e840708623b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.730893 4704 scope.go:117] "RemoveContainer" containerID="f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.731306 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888\": container with ID starting with f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888 not found: ID does not exist" containerID="f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.731343 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888"} err="failed to get container status \"f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888\": rpc error: code = NotFound desc = could not find container \"f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888\": container with ID starting with f1f5894cce46b0ed12732b50be9a8e5d8f792155f8b34356b8963c8012ae4888 not found: ID does not exist" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.813463 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bcf7c3d-641c-4fb3-938b-3e840708623b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.966320 4704 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.976864 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.995656 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996216 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-notification-agent" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996237 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-notification-agent" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996251 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="sg-core" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996259 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="sg-core" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996270 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="proxy-httpd" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996277 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="proxy-httpd" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996293 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-api" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996300 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-api" Jan 22 
17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996317 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1664925-da93-46ad-bbf2-8dd63718c453" containerName="mariadb-account-delete" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996323 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1664925-da93-46ad-bbf2-8dd63718c453" containerName="mariadb-account-delete" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996342 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5e06c7-250e-4c54-9adf-216fd10913ca" containerName="watcher-decision-engine" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996350 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5e06c7-250e-4c54-9adf-216fd10913ca" containerName="watcher-decision-engine" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996361 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-central-agent" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996368 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-central-agent" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996381 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-kuttl-api-log" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996388 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-kuttl-api-log" Jan 22 17:01:57 crc kubenswrapper[4704]: E0122 17:01:57.996399 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" containerName="watcher-applier" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996405 4704 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" containerName="watcher-applier" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996576 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1664925-da93-46ad-bbf2-8dd63718c453" containerName="mariadb-account-delete" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996589 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="sg-core" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996604 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-central-agent" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996618 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5e06c7-250e-4c54-9adf-216fd10913ca" containerName="watcher-decision-engine" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996635 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="ceilometer-notification-agent" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996645 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" containerName="proxy-httpd" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996657 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-api" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996667 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8afb0e-6c6e-4e37-b6d7-057e6b8d8c95" containerName="watcher-applier" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.996678 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b594a1-4164-40a3-8814-0fb00c3fb8b2" containerName="watcher-kuttl-api-log" Jan 22 17:01:57 crc kubenswrapper[4704]: I0122 17:01:57.998570 
4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.007001 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.007011 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.009180 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.017512 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.117166 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69nf7\" (UniqueName: \"kubernetes.io/projected/21d7fca0-3508-4e1d-a9b9-df6266aacd47-kube-api-access-69nf7\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.117262 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-config-data\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.117301 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-log-httpd\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc 
kubenswrapper[4704]: I0122 17:01:58.117361 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-scripts\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.117398 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.117459 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.117516 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-run-httpd\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.117529 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.218995 4704 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-config-data\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219110 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-log-httpd\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219169 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-scripts\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219199 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219250 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219329 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-run-httpd\") 
pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219347 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219368 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69nf7\" (UniqueName: \"kubernetes.io/projected/21d7fca0-3508-4e1d-a9b9-df6266aacd47-kube-api-access-69nf7\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.219578 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-log-httpd\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.220031 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-run-httpd\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.222964 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.223541 
4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-config-data\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.224187 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.225331 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.227029 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-scripts\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.247754 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69nf7\" (UniqueName: \"kubernetes.io/projected/21d7fca0-3508-4e1d-a9b9-df6266aacd47-kube-api-access-69nf7\") pod \"ceilometer-0\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.372218 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.393114 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9"] Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.394476 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.400341 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.418899 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9"] Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.422763 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqhl\" (UniqueName: \"kubernetes.io/projected/d294965f-c653-4a77-8179-db182bf86a01-kube-api-access-glqhl\") pod \"watcher-8f85-account-create-update-rdqn9\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.422807 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d294965f-c653-4a77-8179-db182bf86a01-operator-scripts\") pod \"watcher-8f85-account-create-update-rdqn9\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.427907 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-54bdh"] Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.429587 4704 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.452866 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-54bdh"] Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.526994 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glqhl\" (UniqueName: \"kubernetes.io/projected/d294965f-c653-4a77-8179-db182bf86a01-kube-api-access-glqhl\") pod \"watcher-8f85-account-create-update-rdqn9\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.527065 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d294965f-c653-4a77-8179-db182bf86a01-operator-scripts\") pod \"watcher-8f85-account-create-update-rdqn9\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.527103 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvh8d\" (UniqueName: \"kubernetes.io/projected/cac38922-ed14-435c-8b0a-21fb1d6eb922-kube-api-access-wvh8d\") pod \"watcher-db-create-54bdh\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.527146 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac38922-ed14-435c-8b0a-21fb1d6eb922-operator-scripts\") pod \"watcher-db-create-54bdh\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:58 crc 
kubenswrapper[4704]: I0122 17:01:58.528667 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d294965f-c653-4a77-8179-db182bf86a01-operator-scripts\") pod \"watcher-8f85-account-create-update-rdqn9\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.576152 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqhl\" (UniqueName: \"kubernetes.io/projected/d294965f-c653-4a77-8179-db182bf86a01-kube-api-access-glqhl\") pod \"watcher-8f85-account-create-update-rdqn9\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.628480 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvh8d\" (UniqueName: \"kubernetes.io/projected/cac38922-ed14-435c-8b0a-21fb1d6eb922-kube-api-access-wvh8d\") pod \"watcher-db-create-54bdh\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.628537 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac38922-ed14-435c-8b0a-21fb1d6eb922-operator-scripts\") pod \"watcher-db-create-54bdh\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.629474 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac38922-ed14-435c-8b0a-21fb1d6eb922-operator-scripts\") pod \"watcher-db-create-54bdh\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " 
pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.666637 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvh8d\" (UniqueName: \"kubernetes.io/projected/cac38922-ed14-435c-8b0a-21fb1d6eb922-kube-api-access-wvh8d\") pod \"watcher-db-create-54bdh\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.818122 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.825315 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:01:58 crc kubenswrapper[4704]: I0122 17:01:58.869341 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.371293 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9"] Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.470120 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-54bdh"] Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.646594 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bcf7c3d-641c-4fb3-938b-3e840708623b" path="/var/lib/kubelet/pods/8bcf7c3d-641c-4fb3-938b-3e840708623b/volumes" Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.647874 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5e06c7-250e-4c54-9adf-216fd10913ca" path="/var/lib/kubelet/pods/af5e06c7-250e-4c54-9adf-216fd10913ca/volumes" Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.725003 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-db-create-54bdh" event={"ID":"cac38922-ed14-435c-8b0a-21fb1d6eb922","Type":"ContainerStarted","Data":"db1b858aa60e90725a1490624e98114fcfe3bb7cadd6a0e18f5b83b1f84ac7ff"} Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.725149 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-54bdh" event={"ID":"cac38922-ed14-435c-8b0a-21fb1d6eb922","Type":"ContainerStarted","Data":"5741d348c2dceb40988cbe4a433df5248dd2efd7625e2c755860b721323b182c"} Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.728004 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerStarted","Data":"bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f"} Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.728040 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerStarted","Data":"23fe7886e1e4e728c909df827e4b20f090abd7a545700029ee1ba31e9ea07f35"} Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.729101 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" event={"ID":"d294965f-c653-4a77-8179-db182bf86a01","Type":"ContainerStarted","Data":"e85e235192df04e7ae1ec120e064f20c67e6e63d2ef7d011aa9e6df39722b424"} Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.729127 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" event={"ID":"d294965f-c653-4a77-8179-db182bf86a01","Type":"ContainerStarted","Data":"bdc399dfed0de332be637b30962cba1e6a431b457656e9cc7515c9afd245d5a3"} Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.742673 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-db-create-54bdh" podStartSLOduration=1.742658714 podStartE2EDuration="1.742658714s" podCreationTimestamp="2026-01-22 17:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:59.740706669 +0000 UTC m=+2012.385253369" watchObservedRunningTime="2026-01-22 17:01:59.742658714 +0000 UTC m=+2012.387205414" Jan 22 17:01:59 crc kubenswrapper[4704]: I0122 17:01:59.761191 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" podStartSLOduration=1.761173399 podStartE2EDuration="1.761173399s" podCreationTimestamp="2026-01-22 17:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:01:59.75768146 +0000 UTC m=+2012.402228160" watchObservedRunningTime="2026-01-22 17:01:59.761173399 +0000 UTC m=+2012.405720089" Jan 22 17:02:00 crc kubenswrapper[4704]: I0122 17:02:00.740976 4704 generic.go:334] "Generic (PLEG): container finished" podID="d294965f-c653-4a77-8179-db182bf86a01" containerID="e85e235192df04e7ae1ec120e064f20c67e6e63d2ef7d011aa9e6df39722b424" exitCode=0 Jan 22 17:02:00 crc kubenswrapper[4704]: I0122 17:02:00.741044 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" event={"ID":"d294965f-c653-4a77-8179-db182bf86a01","Type":"ContainerDied","Data":"e85e235192df04e7ae1ec120e064f20c67e6e63d2ef7d011aa9e6df39722b424"} Jan 22 17:02:00 crc kubenswrapper[4704]: I0122 17:02:00.751504 4704 generic.go:334] "Generic (PLEG): container finished" podID="cac38922-ed14-435c-8b0a-21fb1d6eb922" containerID="db1b858aa60e90725a1490624e98114fcfe3bb7cadd6a0e18f5b83b1f84ac7ff" exitCode=0 Jan 22 17:02:00 crc kubenswrapper[4704]: I0122 17:02:00.751641 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-db-create-54bdh" event={"ID":"cac38922-ed14-435c-8b0a-21fb1d6eb922","Type":"ContainerDied","Data":"db1b858aa60e90725a1490624e98114fcfe3bb7cadd6a0e18f5b83b1f84ac7ff"} Jan 22 17:02:00 crc kubenswrapper[4704]: I0122 17:02:00.763324 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerStarted","Data":"534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2"} Jan 22 17:02:01 crc kubenswrapper[4704]: I0122 17:02:01.773984 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerStarted","Data":"8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe"} Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.214695 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.295351 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac38922-ed14-435c-8b0a-21fb1d6eb922-operator-scripts\") pod \"cac38922-ed14-435c-8b0a-21fb1d6eb922\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.295483 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvh8d\" (UniqueName: \"kubernetes.io/projected/cac38922-ed14-435c-8b0a-21fb1d6eb922-kube-api-access-wvh8d\") pod \"cac38922-ed14-435c-8b0a-21fb1d6eb922\" (UID: \"cac38922-ed14-435c-8b0a-21fb1d6eb922\") " Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.296989 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac38922-ed14-435c-8b0a-21fb1d6eb922-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "cac38922-ed14-435c-8b0a-21fb1d6eb922" (UID: "cac38922-ed14-435c-8b0a-21fb1d6eb922"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.302003 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac38922-ed14-435c-8b0a-21fb1d6eb922-kube-api-access-wvh8d" (OuterVolumeSpecName: "kube-api-access-wvh8d") pod "cac38922-ed14-435c-8b0a-21fb1d6eb922" (UID: "cac38922-ed14-435c-8b0a-21fb1d6eb922"). InnerVolumeSpecName "kube-api-access-wvh8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.343262 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.397325 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glqhl\" (UniqueName: \"kubernetes.io/projected/d294965f-c653-4a77-8179-db182bf86a01-kube-api-access-glqhl\") pod \"d294965f-c653-4a77-8179-db182bf86a01\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.397406 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d294965f-c653-4a77-8179-db182bf86a01-operator-scripts\") pod \"d294965f-c653-4a77-8179-db182bf86a01\" (UID: \"d294965f-c653-4a77-8179-db182bf86a01\") " Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.397666 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac38922-ed14-435c-8b0a-21fb1d6eb922-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.397683 4704 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-wvh8d\" (UniqueName: \"kubernetes.io/projected/cac38922-ed14-435c-8b0a-21fb1d6eb922-kube-api-access-wvh8d\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.398130 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d294965f-c653-4a77-8179-db182bf86a01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d294965f-c653-4a77-8179-db182bf86a01" (UID: "d294965f-c653-4a77-8179-db182bf86a01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.401321 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d294965f-c653-4a77-8179-db182bf86a01-kube-api-access-glqhl" (OuterVolumeSpecName: "kube-api-access-glqhl") pod "d294965f-c653-4a77-8179-db182bf86a01" (UID: "d294965f-c653-4a77-8179-db182bf86a01"). InnerVolumeSpecName "kube-api-access-glqhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.499573 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glqhl\" (UniqueName: \"kubernetes.io/projected/d294965f-c653-4a77-8179-db182bf86a01-kube-api-access-glqhl\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.499617 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d294965f-c653-4a77-8179-db182bf86a01-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.782610 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.782616 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9" event={"ID":"d294965f-c653-4a77-8179-db182bf86a01","Type":"ContainerDied","Data":"bdc399dfed0de332be637b30962cba1e6a431b457656e9cc7515c9afd245d5a3"} Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.782768 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc399dfed0de332be637b30962cba1e6a431b457656e9cc7515c9afd245d5a3" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.784370 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-54bdh" event={"ID":"cac38922-ed14-435c-8b0a-21fb1d6eb922","Type":"ContainerDied","Data":"5741d348c2dceb40988cbe4a433df5248dd2efd7625e2c755860b721323b182c"} Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.784399 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-54bdh" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.784481 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5741d348c2dceb40988cbe4a433df5248dd2efd7625e2c755860b721323b182c" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.787027 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerStarted","Data":"224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa"} Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.787186 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:02 crc kubenswrapper[4704]: I0122 17:02:02.815500 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.776790112 podStartE2EDuration="5.815480577s" podCreationTimestamp="2026-01-22 17:01:57 +0000 UTC" firstStartedPulling="2026-01-22 17:01:58.845550723 +0000 UTC m=+2011.490097433" lastFinishedPulling="2026-01-22 17:02:01.884241198 +0000 UTC m=+2014.528787898" observedRunningTime="2026-01-22 17:02:02.811927226 +0000 UTC m=+2015.456473946" watchObservedRunningTime="2026-01-22 17:02:02.815480577 +0000 UTC m=+2015.460027277" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.750061 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr"] Jan 22 17:02:03 crc kubenswrapper[4704]: E0122 17:02:03.750916 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac38922-ed14-435c-8b0a-21fb1d6eb922" containerName="mariadb-database-create" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.751044 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac38922-ed14-435c-8b0a-21fb1d6eb922" containerName="mariadb-database-create" Jan 22 
17:02:03 crc kubenswrapper[4704]: E0122 17:02:03.751131 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d294965f-c653-4a77-8179-db182bf86a01" containerName="mariadb-account-create-update" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.751205 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="d294965f-c653-4a77-8179-db182bf86a01" containerName="mariadb-account-create-update" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.751431 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac38922-ed14-435c-8b0a-21fb1d6eb922" containerName="mariadb-database-create" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.751496 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="d294965f-c653-4a77-8179-db182bf86a01" containerName="mariadb-account-create-update" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.752088 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.754904 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cc74g" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.759450 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.764769 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr"] Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.835659 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-config-data\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.835734 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4spxp\" (UniqueName: \"kubernetes.io/projected/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-kube-api-access-4spxp\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.835816 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.835900 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.937294 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-config-data\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.937364 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4spxp\" (UniqueName: \"kubernetes.io/projected/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-kube-api-access-4spxp\") 
pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.937414 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.937463 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.941927 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.942292 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.952826 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-config-data\") pod 
\"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:03 crc kubenswrapper[4704]: I0122 17:02:03.959603 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4spxp\" (UniqueName: \"kubernetes.io/projected/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-kube-api-access-4spxp\") pod \"watcher-kuttl-db-sync-jt2gr\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:04 crc kubenswrapper[4704]: I0122 17:02:04.070952 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" Jan 22 17:02:04 crc kubenswrapper[4704]: I0122 17:02:04.532486 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr"] Jan 22 17:02:04 crc kubenswrapper[4704]: I0122 17:02:04.810423 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" event={"ID":"6f50820d-ea55-40a8-8d2f-f03dd95edf2a","Type":"ContainerStarted","Data":"3bd28cb90b5e51d8b693d43ca0ca83e3603d3ec4676d285194b3e550c178cabe"} Jan 22 17:02:04 crc kubenswrapper[4704]: I0122 17:02:04.810713 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" event={"ID":"6f50820d-ea55-40a8-8d2f-f03dd95edf2a","Type":"ContainerStarted","Data":"9880795182f8e92cdc119eb7ed8dd22f302563081edd1d84cde5b895bd0b7116"} Jan 22 17:02:04 crc kubenswrapper[4704]: I0122 17:02:04.833727 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" podStartSLOduration=1.8337117059999999 podStartE2EDuration="1.833711706s" podCreationTimestamp="2026-01-22 17:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 17:02:04.825469472 +0000 UTC m=+2017.470016182" watchObservedRunningTime="2026-01-22 17:02:04.833711706 +0000 UTC m=+2017.478258406"
Jan 22 17:02:07 crc kubenswrapper[4704]: I0122 17:02:07.836255 4704 generic.go:334] "Generic (PLEG): container finished" podID="6f50820d-ea55-40a8-8d2f-f03dd95edf2a" containerID="3bd28cb90b5e51d8b693d43ca0ca83e3603d3ec4676d285194b3e550c178cabe" exitCode=0
Jan 22 17:02:07 crc kubenswrapper[4704]: I0122 17:02:07.836350 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" event={"ID":"6f50820d-ea55-40a8-8d2f-f03dd95edf2a","Type":"ContainerDied","Data":"3bd28cb90b5e51d8b693d43ca0ca83e3603d3ec4676d285194b3e550c178cabe"}
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.328827 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr"
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.426101 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-db-sync-config-data\") pod \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") "
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.426569 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4spxp\" (UniqueName: \"kubernetes.io/projected/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-kube-api-access-4spxp\") pod \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") "
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.426738 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-config-data\") pod \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") "
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.426816 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-combined-ca-bundle\") pod \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\" (UID: \"6f50820d-ea55-40a8-8d2f-f03dd95edf2a\") "
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.437053 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-kube-api-access-4spxp" (OuterVolumeSpecName: "kube-api-access-4spxp") pod "6f50820d-ea55-40a8-8d2f-f03dd95edf2a" (UID: "6f50820d-ea55-40a8-8d2f-f03dd95edf2a"). InnerVolumeSpecName "kube-api-access-4spxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.462899 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6f50820d-ea55-40a8-8d2f-f03dd95edf2a" (UID: "6f50820d-ea55-40a8-8d2f-f03dd95edf2a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.463969 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f50820d-ea55-40a8-8d2f-f03dd95edf2a" (UID: "6f50820d-ea55-40a8-8d2f-f03dd95edf2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.499829 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-config-data" (OuterVolumeSpecName: "config-data") pod "6f50820d-ea55-40a8-8d2f-f03dd95edf2a" (UID: "6f50820d-ea55-40a8-8d2f-f03dd95edf2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.528944 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.528989 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.529003 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.529012 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4spxp\" (UniqueName: \"kubernetes.io/projected/6f50820d-ea55-40a8-8d2f-f03dd95edf2a-kube-api-access-4spxp\") on node \"crc\" DevicePath \"\""
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.852850 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr" event={"ID":"6f50820d-ea55-40a8-8d2f-f03dd95edf2a","Type":"ContainerDied","Data":"9880795182f8e92cdc119eb7ed8dd22f302563081edd1d84cde5b895bd0b7116"}
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.853103 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9880795182f8e92cdc119eb7ed8dd22f302563081edd1d84cde5b895bd0b7116"
Jan 22 17:02:09 crc kubenswrapper[4704]: I0122 17:02:09.852897 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.099159 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 17:02:10 crc kubenswrapper[4704]: E0122 17:02:10.099721 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f50820d-ea55-40a8-8d2f-f03dd95edf2a" containerName="watcher-kuttl-db-sync"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.099743 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f50820d-ea55-40a8-8d2f-f03dd95edf2a" containerName="watcher-kuttl-db-sync"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.099931 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f50820d-ea55-40a8-8d2f-f03dd95edf2a" containerName="watcher-kuttl-db-sync"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.101094 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.104885 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.105176 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cc74g"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.116343 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.118204 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.130403 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.139963 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.142095 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.147768 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.148833 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.163867 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.239821 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.239871 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc49d8ed-4894-434a-9d50-46836567ff38-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.239894 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a44497-9095-48b7-a2cb-958c6445a2ca-logs\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.239913 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.239929 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.239975 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b87l7\" (UniqueName: \"kubernetes.io/projected/4b7580e9-29bf-40fa-9e68-af6b0c56d644-kube-api-access-b87l7\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240001 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240042 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240068 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7580e9-29bf-40fa-9e68-af6b0c56d644-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240087 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8mf\" (UniqueName: \"kubernetes.io/projected/dc49d8ed-4894-434a-9d50-46836567ff38-kube-api-access-pw8mf\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240119 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240140 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240158 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240180 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240225 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240266 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vm94\" (UniqueName: \"kubernetes.io/projected/15a44497-9095-48b7-a2cb-958c6445a2ca-kube-api-access-8vm94\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.240290 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.251428 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.253500 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.255400 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.269954 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341423 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341466 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341487 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341506 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341531 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341568 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vm94\" (UniqueName: \"kubernetes.io/projected/15a44497-9095-48b7-a2cb-958c6445a2ca-kube-api-access-8vm94\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341586 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341622 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341640 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc49d8ed-4894-434a-9d50-46836567ff38-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341654 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a44497-9095-48b7-a2cb-958c6445a2ca-logs\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341673 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341687 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341735 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b87l7\" (UniqueName: \"kubernetes.io/projected/4b7580e9-29bf-40fa-9e68-af6b0c56d644-kube-api-access-b87l7\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341761 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341778 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341817 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7580e9-29bf-40fa-9e68-af6b0c56d644-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.341836 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw8mf\" (UniqueName: \"kubernetes.io/projected/dc49d8ed-4894-434a-9d50-46836567ff38-kube-api-access-pw8mf\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.343179 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a44497-9095-48b7-a2cb-958c6445a2ca-logs\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.344507 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7580e9-29bf-40fa-9e68-af6b0c56d644-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.345302 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc49d8ed-4894-434a-9d50-46836567ff38-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.347138 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.348162 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.348741 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.349177 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.349653 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.352424 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.353745 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.354598 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.355240 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.357318 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.358915 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw8mf\" (UniqueName: \"kubernetes.io/projected/dc49d8ed-4894-434a-9d50-46836567ff38-kube-api-access-pw8mf\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.359689 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b87l7\" (UniqueName: \"kubernetes.io/projected/4b7580e9-29bf-40fa-9e68-af6b0c56d644-kube-api-access-b87l7\") pod \"watcher-kuttl-api-0\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.360819 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.361313 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vm94\" (UniqueName: \"kubernetes.io/projected/15a44497-9095-48b7-a2cb-958c6445a2ca-kube-api-access-8vm94\") pod \"watcher-kuttl-api-1\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.416129 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.433980 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.442824 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.443177 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.443213 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.443236 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lgx\" (UniqueName: \"kubernetes.io/projected/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-kube-api-access-q8lgx\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.443273 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.443383 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.458207 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.546464 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.546601 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.546627 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.546658 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.546685 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lgx\" (UniqueName: \"kubernetes.io/projected/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-kube-api-access-q8lgx\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.546761 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.549421 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.552193 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.554377 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.555500 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.558919 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.571710 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lgx\" (UniqueName: \"kubernetes.io/projected/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-kube-api-access-q8lgx\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.585660 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.922263 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 17:02:10 crc kubenswrapper[4704]: W0122 17:02:10.923480 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b7580e9_29bf_40fa_9e68_af6b0c56d644.slice/crio-fa6ceae0928c072a29fb3af1524a796ffbd06f1a4e783b0b0b9514fb33a79331 WatchSource:0}: Error finding container fa6ceae0928c072a29fb3af1524a796ffbd06f1a4e783b0b0b9514fb33a79331: Status 404 returned error can't find the container with id fa6ceae0928c072a29fb3af1524a796ffbd06f1a4e783b0b0b9514fb33a79331
Jan 22 17:02:10 crc kubenswrapper[4704]: I0122 17:02:10.996900 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 17:02:11 crc kubenswrapper[4704]: W0122 17:02:11.000045 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a44497_9095_48b7_a2cb_958c6445a2ca.slice/crio-4de36e3d203d8cc028a56c2e9e48df1285a2706aea33efceb9dea119d78e250e WatchSource:0}: Error finding container 4de36e3d203d8cc028a56c2e9e48df1285a2706aea33efceb9dea119d78e250e: Status 404 returned error can't find the container with id 4de36e3d203d8cc028a56c2e9e48df1285a2706aea33efceb9dea119d78e250e
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.005805 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.138984 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 17:02:11 crc kubenswrapper[4704]: W0122 17:02:11.156864 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf001d4a9_ce4d_49fb_841e_0b51831c4ae2.slice/crio-7b4fd5863464b5929239b877bf35fd7ad354ec5296851224d65ca3d89789bd09 WatchSource:0}: Error finding container 7b4fd5863464b5929239b877bf35fd7ad354ec5296851224d65ca3d89789bd09: Status 404 returned error can't find the container with id 7b4fd5863464b5929239b877bf35fd7ad354ec5296851224d65ca3d89789bd09
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.886137 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b7580e9-29bf-40fa-9e68-af6b0c56d644","Type":"ContainerStarted","Data":"bf18264520e0c37a17d9afb6f629998365b35242876df7dcfe3fed2a12ece7a6"}
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.886539 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b7580e9-29bf-40fa-9e68-af6b0c56d644","Type":"ContainerStarted","Data":"6dd1eac675f0cce75c2248f35fa22ec364b9ab5562a6bd3051e7879b816c0925"}
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.886555 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b7580e9-29bf-40fa-9e68-af6b0c56d644","Type":"ContainerStarted","Data":"fa6ceae0928c072a29fb3af1524a796ffbd06f1a4e783b0b0b9514fb33a79331"}
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.888041 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.894779 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f001d4a9-ce4d-49fb-841e-0b51831c4ae2","Type":"ContainerStarted","Data":"49018b88851f8556a5d3116ef4c09aeb76bc8da79457c4b4a0be79d34d1ba8ea"}
Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.895131 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f001d4a9-ce4d-49fb-841e-0b51831c4ae2","Type":"ContainerStarted","Data":"7b4fd5863464b5929239b877bf35fd7ad354ec5296851224d65ca3d89789bd09"} Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.899655 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dc49d8ed-4894-434a-9d50-46836567ff38","Type":"ContainerStarted","Data":"9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3"} Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.899708 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dc49d8ed-4894-434a-9d50-46836567ff38","Type":"ContainerStarted","Data":"b6aa071d90dfe2783ddbde141177f4acf561f67dd995b5099bc219db4b590093"} Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.906019 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15a44497-9095-48b7-a2cb-958c6445a2ca","Type":"ContainerStarted","Data":"9bb4e287e9121ddfe9b035fe020627e15e58ab6cf533ce6ec9e1f98eed37c52f"} Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.906118 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15a44497-9095-48b7-a2cb-958c6445a2ca","Type":"ContainerStarted","Data":"59a9b69c09b8a35d777063ddb087f9ccd0ec4f0f87142fb129e724080190592a"} Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.906135 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15a44497-9095-48b7-a2cb-958c6445a2ca","Type":"ContainerStarted","Data":"4de36e3d203d8cc028a56c2e9e48df1285a2706aea33efceb9dea119d78e250e"} Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.908761 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 
17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.937373 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.937357437 podStartE2EDuration="1.937357437s" podCreationTimestamp="2026-01-22 17:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:02:11.932980833 +0000 UTC m=+2024.577527533" watchObservedRunningTime="2026-01-22 17:02:11.937357437 +0000 UTC m=+2024.581904127" Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.977220 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.977199736 podStartE2EDuration="1.977199736s" podCreationTimestamp="2026-01-22 17:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:02:11.956666824 +0000 UTC m=+2024.601213514" watchObservedRunningTime="2026-01-22 17:02:11.977199736 +0000 UTC m=+2024.621746446" Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.981625 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=1.9816063609999999 podStartE2EDuration="1.981606361s" podCreationTimestamp="2026-01-22 17:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:02:11.971478954 +0000 UTC m=+2024.616025654" watchObservedRunningTime="2026-01-22 17:02:11.981606361 +0000 UTC m=+2024.626153061" Jan 22 17:02:11 crc kubenswrapper[4704]: I0122 17:02:11.997939 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.9979216530000001 podStartE2EDuration="1.997921653s" 
podCreationTimestamp="2026-01-22 17:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:02:11.993656863 +0000 UTC m=+2024.638203563" watchObservedRunningTime="2026-01-22 17:02:11.997921653 +0000 UTC m=+2024.642468343" Jan 22 17:02:13 crc kubenswrapper[4704]: I0122 17:02:13.921900 4704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 17:02:13 crc kubenswrapper[4704]: I0122 17:02:13.921915 4704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 17:02:14 crc kubenswrapper[4704]: I0122 17:02:14.137173 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:02:14 crc kubenswrapper[4704]: I0122 17:02:14.261567 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:02:15 crc kubenswrapper[4704]: I0122 17:02:15.417046 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:02:15 crc kubenswrapper[4704]: I0122 17:02:15.434947 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:02:15 crc kubenswrapper[4704]: I0122 17:02:15.458647 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.416949 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.422342 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.434309 4704 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.453757 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.459330 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.494716 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.585764 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.611506 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:02:20 crc kubenswrapper[4704]: I0122 17:02:20.997200 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:02:21 crc kubenswrapper[4704]: I0122 17:02:21.001340 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:02:21 crc kubenswrapper[4704]: I0122 17:02:21.001419 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:02:21 crc kubenswrapper[4704]: I0122 17:02:21.044609 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:02:21 crc kubenswrapper[4704]: I0122 17:02:21.058157 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 
17:02:23 crc kubenswrapper[4704]: I0122 17:02:23.247256 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:23 crc kubenswrapper[4704]: I0122 17:02:23.248218 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-central-agent" containerID="cri-o://bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f" gracePeriod=30 Jan 22 17:02:23 crc kubenswrapper[4704]: I0122 17:02:23.249365 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="proxy-httpd" containerID="cri-o://224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa" gracePeriod=30 Jan 22 17:02:23 crc kubenswrapper[4704]: I0122 17:02:23.249432 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="sg-core" containerID="cri-o://8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe" gracePeriod=30 Jan 22 17:02:23 crc kubenswrapper[4704]: I0122 17:02:23.249476 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-notification-agent" containerID="cri-o://534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2" gracePeriod=30 Jan 22 17:02:23 crc kubenswrapper[4704]: I0122 17:02:23.297578 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.203:3000/\": EOF" Jan 22 17:02:24 crc kubenswrapper[4704]: I0122 17:02:24.019898 4704 generic.go:334] "Generic 
(PLEG): container finished" podID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerID="224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa" exitCode=0 Jan 22 17:02:24 crc kubenswrapper[4704]: I0122 17:02:24.020239 4704 generic.go:334] "Generic (PLEG): container finished" podID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerID="8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe" exitCode=2 Jan 22 17:02:24 crc kubenswrapper[4704]: I0122 17:02:24.020251 4704 generic.go:334] "Generic (PLEG): container finished" podID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerID="bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f" exitCode=0 Jan 22 17:02:24 crc kubenswrapper[4704]: I0122 17:02:24.019973 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerDied","Data":"224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa"} Jan 22 17:02:24 crc kubenswrapper[4704]: I0122 17:02:24.020287 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerDied","Data":"8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe"} Jan 22 17:02:24 crc kubenswrapper[4704]: I0122 17:02:24.020301 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerDied","Data":"bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f"} Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.497606 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.619479 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-scripts\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.619530 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69nf7\" (UniqueName: \"kubernetes.io/projected/21d7fca0-3508-4e1d-a9b9-df6266aacd47-kube-api-access-69nf7\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.619585 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-run-httpd\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.619627 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-log-httpd\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.620449 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.620614 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.620772 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-ceilometer-tls-certs\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.620810 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-sg-core-conf-yaml\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.621356 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-combined-ca-bundle\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.621863 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-config-data\") pod \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\" (UID: \"21d7fca0-3508-4e1d-a9b9-df6266aacd47\") " Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.623678 4704 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.623704 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21d7fca0-3508-4e1d-a9b9-df6266aacd47-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.647976 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d7fca0-3508-4e1d-a9b9-df6266aacd47-kube-api-access-69nf7" (OuterVolumeSpecName: "kube-api-access-69nf7") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "kube-api-access-69nf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.648076 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-scripts" (OuterVolumeSpecName: "scripts") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.652168 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.681761 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.721907 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.722150 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-config-data" (OuterVolumeSpecName: "config-data") pod "21d7fca0-3508-4e1d-a9b9-df6266aacd47" (UID: "21d7fca0-3508-4e1d-a9b9-df6266aacd47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.727262 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.727298 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.727310 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.727323 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69nf7\" (UniqueName: \"kubernetes.io/projected/21d7fca0-3508-4e1d-a9b9-df6266aacd47-kube-api-access-69nf7\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.727336 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:26 crc kubenswrapper[4704]: I0122 17:02:26.727346 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21d7fca0-3508-4e1d-a9b9-df6266aacd47-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.047184 4704 generic.go:334] "Generic (PLEG): container finished" podID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerID="534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2" exitCode=0 Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.047255 4704 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerDied","Data":"534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2"} Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.047290 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.047315 4704 scope.go:117] "RemoveContainer" containerID="224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.047301 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"21d7fca0-3508-4e1d-a9b9-df6266aacd47","Type":"ContainerDied","Data":"23fe7886e1e4e728c909df827e4b20f090abd7a545700029ee1ba31e9ea07f35"} Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.083242 4704 scope.go:117] "RemoveContainer" containerID="8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.100191 4704 scope.go:117] "RemoveContainer" containerID="534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.119281 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.126013 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.137623 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.139284 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-notification-agent" Jan 22 17:02:27 crc 
kubenswrapper[4704]: I0122 17:02:27.139325 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-notification-agent" Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.139340 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-central-agent" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.139350 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-central-agent" Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.139367 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="proxy-httpd" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.139375 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="proxy-httpd" Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.139397 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="sg-core" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.139405 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="sg-core" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.139612 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-notification-agent" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.139637 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="ceilometer-central-agent" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.139654 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="proxy-httpd" 
Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.139673 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" containerName="sg-core" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.141511 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.143101 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.143497 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.145917 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.159592 4704 scope.go:117] "RemoveContainer" containerID="bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.167012 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.189654 4704 scope.go:117] "RemoveContainer" containerID="224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa" Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.190237 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa\": container with ID starting with 224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa not found: ID does not exist" containerID="224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.190282 4704 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa"} err="failed to get container status \"224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa\": rpc error: code = NotFound desc = could not find container \"224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa\": container with ID starting with 224439276c73b7243720d23a706d86bfd95074cefd5e09c07a65153104ba06aa not found: ID does not exist" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.190314 4704 scope.go:117] "RemoveContainer" containerID="8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe" Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.190578 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe\": container with ID starting with 8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe not found: ID does not exist" containerID="8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.190598 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe"} err="failed to get container status \"8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe\": rpc error: code = NotFound desc = could not find container \"8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe\": container with ID starting with 8f27b140e976d7db5f2fbf75856d2469f3ac78411c9e10e3662ea9e52a3184fe not found: ID does not exist" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.190615 4704 scope.go:117] "RemoveContainer" containerID="534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2" Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.190883 4704 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2\": container with ID starting with 534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2 not found: ID does not exist" containerID="534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.190912 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2"} err="failed to get container status \"534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2\": rpc error: code = NotFound desc = could not find container \"534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2\": container with ID starting with 534cbf6c287329058f1c515dd34d241d286d52d5481ad79ca4a6985d18570bb2 not found: ID does not exist" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.190935 4704 scope.go:117] "RemoveContainer" containerID="bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f" Jan 22 17:02:27 crc kubenswrapper[4704]: E0122 17:02:27.191537 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f\": container with ID starting with bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f not found: ID does not exist" containerID="bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.191552 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f"} err="failed to get container status \"bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f\": rpc error: code = NotFound desc = could 
not find container \"bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f\": container with ID starting with bbcd638b4b6ca237da6520916c99a9507ce8ba9bd13782059f2e5f9c0fe7571f not found: ID does not exist" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235266 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235312 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bbvt\" (UniqueName: \"kubernetes.io/projected/2be33c17-af62-4139-a650-e2257ae6ef3e-kube-api-access-2bbvt\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235341 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235479 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-run-httpd\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235523 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-config-data\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235607 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-log-httpd\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235654 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-scripts\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.235674 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337483 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bbvt\" (UniqueName: \"kubernetes.io/projected/2be33c17-af62-4139-a650-e2257ae6ef3e-kube-api-access-2bbvt\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337535 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337555 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-run-httpd\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337573 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-config-data\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337624 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-log-httpd\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337656 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-scripts\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337674 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.337723 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.338318 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-run-httpd\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.338453 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-log-httpd\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.345714 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.345870 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-config-data\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.346452 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.347393 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.358664 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-scripts\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.368618 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bbvt\" (UniqueName: \"kubernetes.io/projected/2be33c17-af62-4139-a650-e2257ae6ef3e-kube-api-access-2bbvt\") pod \"ceilometer-0\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.458094 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.648899 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d7fca0-3508-4e1d-a9b9-df6266aacd47" path="/var/lib/kubelet/pods/21d7fca0-3508-4e1d-a9b9-df6266aacd47/volumes" Jan 22 17:02:27 crc kubenswrapper[4704]: I0122 17:02:27.926337 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:27 crc kubenswrapper[4704]: W0122 17:02:27.937348 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2be33c17_af62_4139_a650_e2257ae6ef3e.slice/crio-dd1e5596c4d0d1304b99aa633e98c84498dc56508cb3df5c31cd8c684b48df09 WatchSource:0}: Error finding container dd1e5596c4d0d1304b99aa633e98c84498dc56508cb3df5c31cd8c684b48df09: Status 404 returned error can't find the container with id dd1e5596c4d0d1304b99aa633e98c84498dc56508cb3df5c31cd8c684b48df09 Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.056324 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerStarted","Data":"dd1e5596c4d0d1304b99aa633e98c84498dc56508cb3df5c31cd8c684b48df09"} Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.716358 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.718309 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.755355 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.862725 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.862776 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c4fe9-cf2c-4086-a801-f02c13d32b82-logs\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.862945 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.863031 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htc28\" (UniqueName: \"kubernetes.io/projected/313c4fe9-cf2c-4086-a801-f02c13d32b82-kube-api-access-htc28\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.863105 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.863152 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.965078 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.965443 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htc28\" (UniqueName: \"kubernetes.io/projected/313c4fe9-cf2c-4086-a801-f02c13d32b82-kube-api-access-htc28\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.965493 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.965518 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.965557 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.965581 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c4fe9-cf2c-4086-a801-f02c13d32b82-logs\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.966015 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c4fe9-cf2c-4086-a801-f02c13d32b82-logs\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.970757 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.970767 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-custom-prometheus-ca\") pod 
\"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.971301 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.971496 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:28 crc kubenswrapper[4704]: I0122 17:02:28.988655 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htc28\" (UniqueName: \"kubernetes.io/projected/313c4fe9-cf2c-4086-a801-f02c13d32b82-kube-api-access-htc28\") pod \"watcher-kuttl-api-2\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:29 crc kubenswrapper[4704]: I0122 17:02:29.057601 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:29 crc kubenswrapper[4704]: I0122 17:02:29.066022 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerStarted","Data":"da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50"} Jan 22 17:02:29 crc kubenswrapper[4704]: I0122 17:02:29.513886 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 17:02:29 crc kubenswrapper[4704]: W0122 17:02:29.516284 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod313c4fe9_cf2c_4086_a801_f02c13d32b82.slice/crio-e3f74be40d046ef30e8ce027aa8b2a76efc3aeec18b17d292b81dc04b1d3dd14 WatchSource:0}: Error finding container e3f74be40d046ef30e8ce027aa8b2a76efc3aeec18b17d292b81dc04b1d3dd14: Status 404 returned error can't find the container with id e3f74be40d046ef30e8ce027aa8b2a76efc3aeec18b17d292b81dc04b1d3dd14 Jan 22 17:02:30 crc kubenswrapper[4704]: I0122 17:02:30.074120 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"313c4fe9-cf2c-4086-a801-f02c13d32b82","Type":"ContainerStarted","Data":"8010e778221d2a69c5f8562d412362c00b7eff1e97dcb1808218029b74befaf6"} Jan 22 17:02:30 crc kubenswrapper[4704]: I0122 17:02:30.075246 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:30 crc kubenswrapper[4704]: I0122 17:02:30.075316 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"313c4fe9-cf2c-4086-a801-f02c13d32b82","Type":"ContainerStarted","Data":"4962885b9cd920003bddc85344b6d0d8664c546a23a46103a7053c70a1480d74"} Jan 22 17:02:30 crc kubenswrapper[4704]: I0122 17:02:30.075373 4704 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"313c4fe9-cf2c-4086-a801-f02c13d32b82","Type":"ContainerStarted","Data":"e3f74be40d046ef30e8ce027aa8b2a76efc3aeec18b17d292b81dc04b1d3dd14"} Jan 22 17:02:30 crc kubenswrapper[4704]: I0122 17:02:30.075885 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.212:9322/\": dial tcp 10.217.0.212:9322: connect: connection refused" Jan 22 17:02:30 crc kubenswrapper[4704]: I0122 17:02:30.076611 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerStarted","Data":"cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb"} Jan 22 17:02:30 crc kubenswrapper[4704]: I0122 17:02:30.076714 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerStarted","Data":"20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8"} Jan 22 17:02:32 crc kubenswrapper[4704]: I0122 17:02:32.099659 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerStarted","Data":"d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c"} Jan 22 17:02:32 crc kubenswrapper[4704]: I0122 17:02:32.100328 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:32 crc kubenswrapper[4704]: I0122 17:02:32.143070 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=4.143044184 podStartE2EDuration="4.143044184s" podCreationTimestamp="2026-01-22 17:02:28 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:02:30.098253681 +0000 UTC m=+2042.742800381" watchObservedRunningTime="2026-01-22 17:02:32.143044184 +0000 UTC m=+2044.787590924" Jan 22 17:02:32 crc kubenswrapper[4704]: I0122 17:02:32.149329 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9552020319999999 podStartE2EDuration="5.149308981s" podCreationTimestamp="2026-01-22 17:02:27 +0000 UTC" firstStartedPulling="2026-01-22 17:02:27.939097909 +0000 UTC m=+2040.583644609" lastFinishedPulling="2026-01-22 17:02:31.133204848 +0000 UTC m=+2043.777751558" observedRunningTime="2026-01-22 17:02:32.126521736 +0000 UTC m=+2044.771068446" watchObservedRunningTime="2026-01-22 17:02:32.149308981 +0000 UTC m=+2044.793855721" Jan 22 17:02:33 crc kubenswrapper[4704]: I0122 17:02:33.544065 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:34 crc kubenswrapper[4704]: I0122 17:02:34.058730 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:39 crc kubenswrapper[4704]: I0122 17:02:39.058129 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:39 crc kubenswrapper[4704]: I0122 17:02:39.062202 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:39 crc kubenswrapper[4704]: I0122 17:02:39.165213 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:40 crc kubenswrapper[4704]: I0122 17:02:40.274719 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] 
Jan 22 17:02:40 crc kubenswrapper[4704]: I0122 17:02:40.284187 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:02:40 crc kubenswrapper[4704]: I0122 17:02:40.284406 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-kuttl-api-log" containerID="cri-o://59a9b69c09b8a35d777063ddb087f9ccd0ec4f0f87142fb129e724080190592a" gracePeriod=30 Jan 22 17:02:40 crc kubenswrapper[4704]: I0122 17:02:40.284748 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-api" containerID="cri-o://9bb4e287e9121ddfe9b035fe020627e15e58ab6cf533ce6ec9e1f98eed37c52f" gracePeriod=30 Jan 22 17:02:40 crc kubenswrapper[4704]: I0122 17:02:40.761734 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.208:9322/\": read tcp 10.217.0.2:40454->10.217.0.208:9322: read: connection reset by peer" Jan 22 17:02:40 crc kubenswrapper[4704]: I0122 17:02:40.761890 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.208:9322/\": read tcp 10.217.0.2:40462->10.217.0.208:9322: read: connection reset by peer" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.178167 4704 generic.go:334] "Generic (PLEG): container finished" podID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerID="9bb4e287e9121ddfe9b035fe020627e15e58ab6cf533ce6ec9e1f98eed37c52f" exitCode=0 Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.178523 
4704 generic.go:334] "Generic (PLEG): container finished" podID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerID="59a9b69c09b8a35d777063ddb087f9ccd0ec4f0f87142fb129e724080190592a" exitCode=143 Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.178728 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-kuttl-api-log" containerID="cri-o://4962885b9cd920003bddc85344b6d0d8664c546a23a46103a7053c70a1480d74" gracePeriod=30 Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.178277 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15a44497-9095-48b7-a2cb-958c6445a2ca","Type":"ContainerDied","Data":"9bb4e287e9121ddfe9b035fe020627e15e58ab6cf533ce6ec9e1f98eed37c52f"} Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.178869 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15a44497-9095-48b7-a2cb-958c6445a2ca","Type":"ContainerDied","Data":"59a9b69c09b8a35d777063ddb087f9ccd0ec4f0f87142fb129e724080190592a"} Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.178888 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15a44497-9095-48b7-a2cb-958c6445a2ca","Type":"ContainerDied","Data":"4de36e3d203d8cc028a56c2e9e48df1285a2706aea33efceb9dea119d78e250e"} Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.178902 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4de36e3d203d8cc028a56c2e9e48df1285a2706aea33efceb9dea119d78e250e" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.179213 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-api" 
containerID="cri-o://8010e778221d2a69c5f8562d412362c00b7eff1e97dcb1808218029b74befaf6" gracePeriod=30 Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.214346 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.259558 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-custom-prometheus-ca\") pod \"15a44497-9095-48b7-a2cb-958c6445a2ca\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.259701 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-cert-memcached-mtls\") pod \"15a44497-9095-48b7-a2cb-958c6445a2ca\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.259813 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vm94\" (UniqueName: \"kubernetes.io/projected/15a44497-9095-48b7-a2cb-958c6445a2ca-kube-api-access-8vm94\") pod \"15a44497-9095-48b7-a2cb-958c6445a2ca\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.259865 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-config-data\") pod \"15a44497-9095-48b7-a2cb-958c6445a2ca\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.259995 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-combined-ca-bundle\") pod 
\"15a44497-9095-48b7-a2cb-958c6445a2ca\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.260068 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a44497-9095-48b7-a2cb-958c6445a2ca-logs\") pod \"15a44497-9095-48b7-a2cb-958c6445a2ca\" (UID: \"15a44497-9095-48b7-a2cb-958c6445a2ca\") " Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.260511 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a44497-9095-48b7-a2cb-958c6445a2ca-logs" (OuterVolumeSpecName: "logs") pod "15a44497-9095-48b7-a2cb-958c6445a2ca" (UID: "15a44497-9095-48b7-a2cb-958c6445a2ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.291062 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a44497-9095-48b7-a2cb-958c6445a2ca-kube-api-access-8vm94" (OuterVolumeSpecName: "kube-api-access-8vm94") pod "15a44497-9095-48b7-a2cb-958c6445a2ca" (UID: "15a44497-9095-48b7-a2cb-958c6445a2ca"). InnerVolumeSpecName "kube-api-access-8vm94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.296092 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "15a44497-9095-48b7-a2cb-958c6445a2ca" (UID: "15a44497-9095-48b7-a2cb-958c6445a2ca"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.304180 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15a44497-9095-48b7-a2cb-958c6445a2ca" (UID: "15a44497-9095-48b7-a2cb-958c6445a2ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.338292 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-config-data" (OuterVolumeSpecName: "config-data") pod "15a44497-9095-48b7-a2cb-958c6445a2ca" (UID: "15a44497-9095-48b7-a2cb-958c6445a2ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.363217 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vm94\" (UniqueName: \"kubernetes.io/projected/15a44497-9095-48b7-a2cb-958c6445a2ca-kube-api-access-8vm94\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.363252 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.363265 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.363274 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a44497-9095-48b7-a2cb-958c6445a2ca-logs\") on node \"crc\" DevicePath \"\"" Jan 22 
17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.363282 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.371715 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "15a44497-9095-48b7-a2cb-958c6445a2ca" (UID: "15a44497-9095-48b7-a2cb-958c6445a2ca"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:41 crc kubenswrapper[4704]: I0122 17:02:41.465152 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15a44497-9095-48b7-a2cb-958c6445a2ca-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.188022 4704 generic.go:334] "Generic (PLEG): container finished" podID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerID="8010e778221d2a69c5f8562d412362c00b7eff1e97dcb1808218029b74befaf6" exitCode=0 Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.188049 4704 generic.go:334] "Generic (PLEG): container finished" podID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerID="4962885b9cd920003bddc85344b6d0d8664c546a23a46103a7053c70a1480d74" exitCode=143 Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.188129 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.188118 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"313c4fe9-cf2c-4086-a801-f02c13d32b82","Type":"ContainerDied","Data":"8010e778221d2a69c5f8562d412362c00b7eff1e97dcb1808218029b74befaf6"} Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.188176 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"313c4fe9-cf2c-4086-a801-f02c13d32b82","Type":"ContainerDied","Data":"4962885b9cd920003bddc85344b6d0d8664c546a23a46103a7053c70a1480d74"} Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.244383 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.250761 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.940162 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.985663 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-combined-ca-bundle\") pod \"313c4fe9-cf2c-4086-a801-f02c13d32b82\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.985813 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-custom-prometheus-ca\") pod \"313c4fe9-cf2c-4086-a801-f02c13d32b82\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.985879 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-config-data\") pod \"313c4fe9-cf2c-4086-a801-f02c13d32b82\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.985998 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htc28\" (UniqueName: \"kubernetes.io/projected/313c4fe9-cf2c-4086-a801-f02c13d32b82-kube-api-access-htc28\") pod \"313c4fe9-cf2c-4086-a801-f02c13d32b82\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.986067 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c4fe9-cf2c-4086-a801-f02c13d32b82-logs\") pod \"313c4fe9-cf2c-4086-a801-f02c13d32b82\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.986121 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-cert-memcached-mtls\") pod \"313c4fe9-cf2c-4086-a801-f02c13d32b82\" (UID: \"313c4fe9-cf2c-4086-a801-f02c13d32b82\") " Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.986635 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/313c4fe9-cf2c-4086-a801-f02c13d32b82-logs" (OuterVolumeSpecName: "logs") pod "313c4fe9-cf2c-4086-a801-f02c13d32b82" (UID: "313c4fe9-cf2c-4086-a801-f02c13d32b82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:42 crc kubenswrapper[4704]: I0122 17:02:42.994960 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313c4fe9-cf2c-4086-a801-f02c13d32b82-kube-api-access-htc28" (OuterVolumeSpecName: "kube-api-access-htc28") pod "313c4fe9-cf2c-4086-a801-f02c13d32b82" (UID: "313c4fe9-cf2c-4086-a801-f02c13d32b82"). InnerVolumeSpecName "kube-api-access-htc28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.013106 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "313c4fe9-cf2c-4086-a801-f02c13d32b82" (UID: "313c4fe9-cf2c-4086-a801-f02c13d32b82"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.022165 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "313c4fe9-cf2c-4086-a801-f02c13d32b82" (UID: "313c4fe9-cf2c-4086-a801-f02c13d32b82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.041979 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-config-data" (OuterVolumeSpecName: "config-data") pod "313c4fe9-cf2c-4086-a801-f02c13d32b82" (UID: "313c4fe9-cf2c-4086-a801-f02c13d32b82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.063403 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "313c4fe9-cf2c-4086-a801-f02c13d32b82" (UID: "313c4fe9-cf2c-4086-a801-f02c13d32b82"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.089196 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htc28\" (UniqueName: \"kubernetes.io/projected/313c4fe9-cf2c-4086-a801-f02c13d32b82-kube-api-access-htc28\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.089232 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313c4fe9-cf2c-4086-a801-f02c13d32b82-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.089248 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.089259 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.089270 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.089281 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313c4fe9-cf2c-4086-a801-f02c13d32b82-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.198884 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"313c4fe9-cf2c-4086-a801-f02c13d32b82","Type":"ContainerDied","Data":"e3f74be40d046ef30e8ce027aa8b2a76efc3aeec18b17d292b81dc04b1d3dd14"} Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.198973 4704 scope.go:117] "RemoveContainer" containerID="8010e778221d2a69c5f8562d412362c00b7eff1e97dcb1808218029b74befaf6" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.199002 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.229629 4704 scope.go:117] "RemoveContainer" containerID="4962885b9cd920003bddc85344b6d0d8664c546a23a46103a7053c70a1480d74" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.248192 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.260974 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.630786 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.631036 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-kuttl-api-log" containerID="cri-o://6dd1eac675f0cce75c2248f35fa22ec364b9ab5562a6bd3051e7879b816c0925" gracePeriod=30 Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.631108 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-api" containerID="cri-o://bf18264520e0c37a17d9afb6f629998365b35242876df7dcfe3fed2a12ece7a6" gracePeriod=30 Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.641782 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" path="/var/lib/kubelet/pods/15a44497-9095-48b7-a2cb-958c6445a2ca/volumes" Jan 22 17:02:43 crc kubenswrapper[4704]: I0122 17:02:43.642596 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" path="/var/lib/kubelet/pods/313c4fe9-cf2c-4086-a801-f02c13d32b82/volumes" Jan 22 
17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.210921 4704 generic.go:334] "Generic (PLEG): container finished" podID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerID="bf18264520e0c37a17d9afb6f629998365b35242876df7dcfe3fed2a12ece7a6" exitCode=0 Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.210961 4704 generic.go:334] "Generic (PLEG): container finished" podID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerID="6dd1eac675f0cce75c2248f35fa22ec364b9ab5562a6bd3051e7879b816c0925" exitCode=143 Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.210985 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b7580e9-29bf-40fa-9e68-af6b0c56d644","Type":"ContainerDied","Data":"bf18264520e0c37a17d9afb6f629998365b35242876df7dcfe3fed2a12ece7a6"} Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.211016 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b7580e9-29bf-40fa-9e68-af6b0c56d644","Type":"ContainerDied","Data":"6dd1eac675f0cce75c2248f35fa22ec364b9ab5562a6bd3051e7879b816c0925"} Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.536622 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.612295 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7580e9-29bf-40fa-9e68-af6b0c56d644-logs\") pod \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.612372 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-config-data\") pod \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.612426 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-cert-memcached-mtls\") pod \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.612457 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b87l7\" (UniqueName: \"kubernetes.io/projected/4b7580e9-29bf-40fa-9e68-af6b0c56d644-kube-api-access-b87l7\") pod \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.612539 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-custom-prometheus-ca\") pod \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.612608 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-combined-ca-bundle\") pod \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\" (UID: \"4b7580e9-29bf-40fa-9e68-af6b0c56d644\") " Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.612768 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b7580e9-29bf-40fa-9e68-af6b0c56d644-logs" (OuterVolumeSpecName: "logs") pod "4b7580e9-29bf-40fa-9e68-af6b0c56d644" (UID: "4b7580e9-29bf-40fa-9e68-af6b0c56d644"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.613450 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b7580e9-29bf-40fa-9e68-af6b0c56d644-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.624354 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7580e9-29bf-40fa-9e68-af6b0c56d644-kube-api-access-b87l7" (OuterVolumeSpecName: "kube-api-access-b87l7") pod "4b7580e9-29bf-40fa-9e68-af6b0c56d644" (UID: "4b7580e9-29bf-40fa-9e68-af6b0c56d644"). InnerVolumeSpecName "kube-api-access-b87l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.635888 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4b7580e9-29bf-40fa-9e68-af6b0c56d644" (UID: "4b7580e9-29bf-40fa-9e68-af6b0c56d644"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.649993 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b7580e9-29bf-40fa-9e68-af6b0c56d644" (UID: "4b7580e9-29bf-40fa-9e68-af6b0c56d644"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.687071 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-config-data" (OuterVolumeSpecName: "config-data") pod "4b7580e9-29bf-40fa-9e68-af6b0c56d644" (UID: "4b7580e9-29bf-40fa-9e68-af6b0c56d644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.715969 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.716037 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.715969 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "4b7580e9-29bf-40fa-9e68-af6b0c56d644" (UID: "4b7580e9-29bf-40fa-9e68-af6b0c56d644"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.716055 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b87l7\" (UniqueName: \"kubernetes.io/projected/4b7580e9-29bf-40fa-9e68-af6b0c56d644-kube-api-access-b87l7\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.716123 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:44 crc kubenswrapper[4704]: I0122 17:02:44.817921 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b7580e9-29bf-40fa-9e68-af6b0c56d644-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.013155 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr"] Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.023922 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jt2gr"] Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.071269 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.071531 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dc49d8ed-4894-434a-9d50-46836567ff38" containerName="watcher-applier" containerID="cri-o://9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" gracePeriod=30 Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.155830 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher8f85-account-delete-ck2wh"] Jan 22 17:02:45 crc kubenswrapper[4704]: 
E0122 17:02:45.156421 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156437 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.156454 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156460 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.156478 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156484 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.156494 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156499 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.156513 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156520 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-api" Jan 22 17:02:45 crc 
kubenswrapper[4704]: E0122 17:02:45.156526 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156532 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156674 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156685 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156694 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156705 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a44497-9095-48b7-a2cb-958c6445a2ca" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156716 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" containerName="watcher-api" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.156729 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="313c4fe9-cf2c-4086-a801-f02c13d32b82" containerName="watcher-kuttl-api-log" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.157254 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.181548 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8f85-account-delete-ck2wh"] Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.221202 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.221504 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="f001d4a9-ce4d-49fb-841e-0b51831c4ae2" containerName="watcher-decision-engine" containerID="cri-o://49018b88851f8556a5d3116ef4c09aeb76bc8da79457c4b4a0be79d34d1ba8ea" gracePeriod=30 Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.236312 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4b7580e9-29bf-40fa-9e68-af6b0c56d644","Type":"ContainerDied","Data":"fa6ceae0928c072a29fb3af1524a796ffbd06f1a4e783b0b0b9514fb33a79331"} Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.236360 4704 scope.go:117] "RemoveContainer" containerID="bf18264520e0c37a17d9afb6f629998365b35242876df7dcfe3fed2a12ece7a6" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.236482 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.256590 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgs8\" (UniqueName: \"kubernetes.io/projected/07cbb45a-02fb-4193-89b7-b54bc760af60-kube-api-access-5lgs8\") pod \"watcher8f85-account-delete-ck2wh\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.256628 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cbb45a-02fb-4193-89b7-b54bc760af60-operator-scripts\") pod \"watcher8f85-account-delete-ck2wh\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.285994 4704 scope.go:117] "RemoveContainer" containerID="6dd1eac675f0cce75c2248f35fa22ec364b9ab5562a6bd3051e7879b816c0925" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.288420 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.300049 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.358085 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgs8\" (UniqueName: \"kubernetes.io/projected/07cbb45a-02fb-4193-89b7-b54bc760af60-kube-api-access-5lgs8\") pod \"watcher8f85-account-delete-ck2wh\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.358134 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cbb45a-02fb-4193-89b7-b54bc760af60-operator-scripts\") pod \"watcher8f85-account-delete-ck2wh\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.358924 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cbb45a-02fb-4193-89b7-b54bc760af60-operator-scripts\") pod \"watcher8f85-account-delete-ck2wh\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.375843 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgs8\" (UniqueName: \"kubernetes.io/projected/07cbb45a-02fb-4193-89b7-b54bc760af60-kube-api-access-5lgs8\") pod \"watcher8f85-account-delete-ck2wh\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.460987 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.464454 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.465858 4704 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:02:45 crc kubenswrapper[4704]: E0122 17:02:45.465909 4704 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dc49d8ed-4894-434a-9d50-46836567ff38" containerName="watcher-applier" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.494787 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.644584 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b7580e9-29bf-40fa-9e68-af6b0c56d644" path="/var/lib/kubelet/pods/4b7580e9-29bf-40fa-9e68-af6b0c56d644/volumes" Jan 22 17:02:45 crc kubenswrapper[4704]: I0122 17:02:45.646008 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f50820d-ea55-40a8-8d2f-f03dd95edf2a" path="/var/lib/kubelet/pods/6f50820d-ea55-40a8-8d2f-f03dd95edf2a/volumes" Jan 22 17:02:45 crc kubenswrapper[4704]: W0122 17:02:45.966396 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07cbb45a_02fb_4193_89b7_b54bc760af60.slice/crio-d957576fb17bacbfc1ea293f87b3dbdd84ad23da4590609089c5a2c24b560663 WatchSource:0}: Error finding container d957576fb17bacbfc1ea293f87b3dbdd84ad23da4590609089c5a2c24b560663: Status 404 returned error can't find the container with id d957576fb17bacbfc1ea293f87b3dbdd84ad23da4590609089c5a2c24b560663 Jan 22 17:02:45 crc kubenswrapper[4704]: 
I0122 17:02:45.975544 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8f85-account-delete-ck2wh"] Jan 22 17:02:46 crc kubenswrapper[4704]: I0122 17:02:46.244265 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" event={"ID":"07cbb45a-02fb-4193-89b7-b54bc760af60","Type":"ContainerStarted","Data":"adcc59c8f67189ff36ca8240aa1c33a4ac9c87ab1dfedf763a56668e9367f564"} Jan 22 17:02:46 crc kubenswrapper[4704]: I0122 17:02:46.244587 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" event={"ID":"07cbb45a-02fb-4193-89b7-b54bc760af60","Type":"ContainerStarted","Data":"d957576fb17bacbfc1ea293f87b3dbdd84ad23da4590609089c5a2c24b560663"} Jan 22 17:02:46 crc kubenswrapper[4704]: I0122 17:02:46.261995 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" podStartSLOduration=1.261975463 podStartE2EDuration="1.261975463s" podCreationTimestamp="2026-01-22 17:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:02:46.256269691 +0000 UTC m=+2058.900816401" watchObservedRunningTime="2026-01-22 17:02:46.261975463 +0000 UTC m=+2058.906522163" Jan 22 17:02:47 crc kubenswrapper[4704]: I0122 17:02:47.255978 4704 generic.go:334] "Generic (PLEG): container finished" podID="07cbb45a-02fb-4193-89b7-b54bc760af60" containerID="adcc59c8f67189ff36ca8240aa1c33a4ac9c87ab1dfedf763a56668e9367f564" exitCode=0 Jan 22 17:02:47 crc kubenswrapper[4704]: I0122 17:02:47.256019 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" event={"ID":"07cbb45a-02fb-4193-89b7-b54bc760af60","Type":"ContainerDied","Data":"adcc59c8f67189ff36ca8240aa1c33a4ac9c87ab1dfedf763a56668e9367f564"} Jan 22 17:02:47 crc 
kubenswrapper[4704]: I0122 17:02:47.793402 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:47 crc kubenswrapper[4704]: I0122 17:02:47.794065 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-central-agent" containerID="cri-o://da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50" gracePeriod=30 Jan 22 17:02:47 crc kubenswrapper[4704]: I0122 17:02:47.794142 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-notification-agent" containerID="cri-o://20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8" gracePeriod=30 Jan 22 17:02:47 crc kubenswrapper[4704]: I0122 17:02:47.794183 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="proxy-httpd" containerID="cri-o://d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c" gracePeriod=30 Jan 22 17:02:47 crc kubenswrapper[4704]: I0122 17:02:47.794137 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="sg-core" containerID="cri-o://cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb" gracePeriod=30 Jan 22 17:02:47 crc kubenswrapper[4704]: I0122 17:02:47.808214 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.270123 4704 generic.go:334] "Generic (PLEG): container finished" podID="2be33c17-af62-4139-a650-e2257ae6ef3e" 
containerID="d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c" exitCode=0 Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.270153 4704 generic.go:334] "Generic (PLEG): container finished" podID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerID="cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb" exitCode=2 Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.270160 4704 generic.go:334] "Generic (PLEG): container finished" podID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerID="da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50" exitCode=0 Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.270299 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerDied","Data":"d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c"} Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.270325 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerDied","Data":"cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb"} Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.270336 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerDied","Data":"da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50"} Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.632973 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.725024 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cbb45a-02fb-4193-89b7-b54bc760af60-operator-scripts\") pod \"07cbb45a-02fb-4193-89b7-b54bc760af60\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.725105 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lgs8\" (UniqueName: \"kubernetes.io/projected/07cbb45a-02fb-4193-89b7-b54bc760af60-kube-api-access-5lgs8\") pod \"07cbb45a-02fb-4193-89b7-b54bc760af60\" (UID: \"07cbb45a-02fb-4193-89b7-b54bc760af60\") " Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.726142 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07cbb45a-02fb-4193-89b7-b54bc760af60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07cbb45a-02fb-4193-89b7-b54bc760af60" (UID: "07cbb45a-02fb-4193-89b7-b54bc760af60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.726806 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07cbb45a-02fb-4193-89b7-b54bc760af60-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.730764 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07cbb45a-02fb-4193-89b7-b54bc760af60-kube-api-access-5lgs8" (OuterVolumeSpecName: "kube-api-access-5lgs8") pod "07cbb45a-02fb-4193-89b7-b54bc760af60" (UID: "07cbb45a-02fb-4193-89b7-b54bc760af60"). InnerVolumeSpecName "kube-api-access-5lgs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:48 crc kubenswrapper[4704]: I0122 17:02:48.828633 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lgs8\" (UniqueName: \"kubernetes.io/projected/07cbb45a-02fb-4193-89b7-b54bc760af60-kube-api-access-5lgs8\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.307350 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" event={"ID":"07cbb45a-02fb-4193-89b7-b54bc760af60","Type":"ContainerDied","Data":"d957576fb17bacbfc1ea293f87b3dbdd84ad23da4590609089c5a2c24b560663"} Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.307390 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d957576fb17bacbfc1ea293f87b3dbdd84ad23da4590609089c5a2c24b560663" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.307438 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8f85-account-delete-ck2wh" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.738044 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.842607 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-cert-memcached-mtls\") pod \"dc49d8ed-4894-434a-9d50-46836567ff38\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.842720 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc49d8ed-4894-434a-9d50-46836567ff38-logs\") pod \"dc49d8ed-4894-434a-9d50-46836567ff38\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.842815 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw8mf\" (UniqueName: \"kubernetes.io/projected/dc49d8ed-4894-434a-9d50-46836567ff38-kube-api-access-pw8mf\") pod \"dc49d8ed-4894-434a-9d50-46836567ff38\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.842856 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-combined-ca-bundle\") pod \"dc49d8ed-4894-434a-9d50-46836567ff38\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.842874 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-config-data\") pod \"dc49d8ed-4894-434a-9d50-46836567ff38\" (UID: \"dc49d8ed-4894-434a-9d50-46836567ff38\") " Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.843166 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dc49d8ed-4894-434a-9d50-46836567ff38-logs" (OuterVolumeSpecName: "logs") pod "dc49d8ed-4894-434a-9d50-46836567ff38" (UID: "dc49d8ed-4894-434a-9d50-46836567ff38"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.861581 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc49d8ed-4894-434a-9d50-46836567ff38-kube-api-access-pw8mf" (OuterVolumeSpecName: "kube-api-access-pw8mf") pod "dc49d8ed-4894-434a-9d50-46836567ff38" (UID: "dc49d8ed-4894-434a-9d50-46836567ff38"). InnerVolumeSpecName "kube-api-access-pw8mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.881151 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc49d8ed-4894-434a-9d50-46836567ff38" (UID: "dc49d8ed-4894-434a-9d50-46836567ff38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.890159 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-config-data" (OuterVolumeSpecName: "config-data") pod "dc49d8ed-4894-434a-9d50-46836567ff38" (UID: "dc49d8ed-4894-434a-9d50-46836567ff38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.927944 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "dc49d8ed-4894-434a-9d50-46836567ff38" (UID: "dc49d8ed-4894-434a-9d50-46836567ff38"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.944807 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.944839 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.944848 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dc49d8ed-4894-434a-9d50-46836567ff38-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.944857 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc49d8ed-4894-434a-9d50-46836567ff38-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:49 crc kubenswrapper[4704]: I0122 17:02:49.944865 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw8mf\" (UniqueName: \"kubernetes.io/projected/dc49d8ed-4894-434a-9d50-46836567ff38-kube-api-access-pw8mf\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.169006 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-54bdh"] Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.187104 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-54bdh"] Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.209053 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9"] Jan 22 17:02:50 crc 
kubenswrapper[4704]: I0122 17:02:50.226713 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher8f85-account-delete-ck2wh"] Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.238777 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-8f85-account-create-update-rdqn9"] Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.247706 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher8f85-account-delete-ck2wh"] Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.331726 4704 generic.go:334] "Generic (PLEG): container finished" podID="f001d4a9-ce4d-49fb-841e-0b51831c4ae2" containerID="49018b88851f8556a5d3116ef4c09aeb76bc8da79457c4b4a0be79d34d1ba8ea" exitCode=0 Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.331813 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f001d4a9-ce4d-49fb-841e-0b51831c4ae2","Type":"ContainerDied","Data":"49018b88851f8556a5d3116ef4c09aeb76bc8da79457c4b4a0be79d34d1ba8ea"} Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.338326 4704 generic.go:334] "Generic (PLEG): container finished" podID="dc49d8ed-4894-434a-9d50-46836567ff38" containerID="9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" exitCode=0 Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.338399 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dc49d8ed-4894-434a-9d50-46836567ff38","Type":"ContainerDied","Data":"9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3"} Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.338434 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"dc49d8ed-4894-434a-9d50-46836567ff38","Type":"ContainerDied","Data":"b6aa071d90dfe2783ddbde141177f4acf561f67dd995b5099bc219db4b590093"} Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.338455 4704 scope.go:117] "RemoveContainer" containerID="9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.338596 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.373609 4704 scope.go:117] "RemoveContainer" containerID="9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" Jan 22 17:02:50 crc kubenswrapper[4704]: E0122 17:02:50.374076 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3\": container with ID starting with 9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3 not found: ID does not exist" containerID="9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.374101 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3"} err="failed to get container status \"9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3\": rpc error: code = NotFound desc = could not find container \"9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3\": container with ID starting with 9e1462e0ae51823267f3e5f823ebebd3d6a9581372df92b7eb677fca79c176f3 not found: ID does not exist" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.413158 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.431172 
4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.510387 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.657726 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-combined-ca-bundle\") pod \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.657773 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-config-data\") pod \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.657841 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-logs\") pod \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.657883 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-custom-prometheus-ca\") pod \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.657977 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8lgx\" (UniqueName: \"kubernetes.io/projected/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-kube-api-access-q8lgx\") pod 
\"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.658004 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-cert-memcached-mtls\") pod \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\" (UID: \"f001d4a9-ce4d-49fb-841e-0b51831c4ae2\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.659986 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-logs" (OuterVolumeSpecName: "logs") pod "f001d4a9-ce4d-49fb-841e-0b51831c4ae2" (UID: "f001d4a9-ce4d-49fb-841e-0b51831c4ae2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.670393 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-kube-api-access-q8lgx" (OuterVolumeSpecName: "kube-api-access-q8lgx") pod "f001d4a9-ce4d-49fb-841e-0b51831c4ae2" (UID: "f001d4a9-ce4d-49fb-841e-0b51831c4ae2"). InnerVolumeSpecName "kube-api-access-q8lgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.680880 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f001d4a9-ce4d-49fb-841e-0b51831c4ae2" (UID: "f001d4a9-ce4d-49fb-841e-0b51831c4ae2"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.688076 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f001d4a9-ce4d-49fb-841e-0b51831c4ae2" (UID: "f001d4a9-ce4d-49fb-841e-0b51831c4ae2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.703586 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-config-data" (OuterVolumeSpecName: "config-data") pod "f001d4a9-ce4d-49fb-841e-0b51831c4ae2" (UID: "f001d4a9-ce4d-49fb-841e-0b51831c4ae2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.739934 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f001d4a9-ce4d-49fb-841e-0b51831c4ae2" (UID: "f001d4a9-ce4d-49fb-841e-0b51831c4ae2"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.745891 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.760203 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.760244 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.760257 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8lgx\" (UniqueName: \"kubernetes.io/projected/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-kube-api-access-q8lgx\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.760268 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.760279 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.760295 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001d4a9-ce4d-49fb-841e-0b51831c4ae2-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861474 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-config-data\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: 
\"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861524 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-scripts\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861576 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-log-httpd\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861678 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-combined-ca-bundle\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861697 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-sg-core-conf-yaml\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861761 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bbvt\" (UniqueName: \"kubernetes.io/projected/2be33c17-af62-4139-a650-e2257ae6ef3e-kube-api-access-2bbvt\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861778 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-ceilometer-tls-certs\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.861817 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-run-httpd\") pod \"2be33c17-af62-4139-a650-e2257ae6ef3e\" (UID: \"2be33c17-af62-4139-a650-e2257ae6ef3e\") " Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.862404 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.866571 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.874133 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be33c17-af62-4139-a650-e2257ae6ef3e-kube-api-access-2bbvt" (OuterVolumeSpecName: "kube-api-access-2bbvt") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "kube-api-access-2bbvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.874567 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-scripts" (OuterVolumeSpecName: "scripts") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.904984 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.950833 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.952105 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.963904 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.964164 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.964240 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bbvt\" (UniqueName: \"kubernetes.io/projected/2be33c17-af62-4139-a650-e2257ae6ef3e-kube-api-access-2bbvt\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.964312 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.964380 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.964445 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.964551 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2be33c17-af62-4139-a650-e2257ae6ef3e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:50 crc kubenswrapper[4704]: I0122 17:02:50.980590 4704 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-config-data" (OuterVolumeSpecName: "config-data") pod "2be33c17-af62-4139-a650-e2257ae6ef3e" (UID: "2be33c17-af62-4139-a650-e2257ae6ef3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.066162 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be33c17-af62-4139-a650-e2257ae6ef3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.348091 4704 generic.go:334] "Generic (PLEG): container finished" podID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerID="20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8" exitCode=0 Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.348139 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerDied","Data":"20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8"} Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.348162 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2be33c17-af62-4139-a650-e2257ae6ef3e","Type":"ContainerDied","Data":"dd1e5596c4d0d1304b99aa633e98c84498dc56508cb3df5c31cd8c684b48df09"} Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.348179 4704 scope.go:117] "RemoveContainer" containerID="d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.348278 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.356582 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f001d4a9-ce4d-49fb-841e-0b51831c4ae2","Type":"ContainerDied","Data":"7b4fd5863464b5929239b877bf35fd7ad354ec5296851224d65ca3d89789bd09"} Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.356624 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.382247 4704 scope.go:117] "RemoveContainer" containerID="cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.387834 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.396330 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.417934 4704 scope.go:117] "RemoveContainer" containerID="20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.441677 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.445965 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.446277 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f001d4a9-ce4d-49fb-841e-0b51831c4ae2" containerName="watcher-decision-engine" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446294 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f001d4a9-ce4d-49fb-841e-0b51831c4ae2" 
containerName="watcher-decision-engine" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.446305 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-notification-agent" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446312 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-notification-agent" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.446338 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="sg-core" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446345 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="sg-core" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.446356 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-central-agent" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446362 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-central-agent" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.446370 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc49d8ed-4894-434a-9d50-46836567ff38" containerName="watcher-applier" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446376 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc49d8ed-4894-434a-9d50-46836567ff38" containerName="watcher-applier" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.446385 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07cbb45a-02fb-4193-89b7-b54bc760af60" containerName="mariadb-account-delete" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446391 4704 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="07cbb45a-02fb-4193-89b7-b54bc760af60" containerName="mariadb-account-delete" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.446401 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="proxy-httpd" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446407 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="proxy-httpd" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446555 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="proxy-httpd" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446569 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="sg-core" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446582 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-central-agent" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446592 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc49d8ed-4894-434a-9d50-46836567ff38" containerName="watcher-applier" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446599 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" containerName="ceilometer-notification-agent" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446610 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f001d4a9-ce4d-49fb-841e-0b51831c4ae2" containerName="watcher-decision-engine" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.446623 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="07cbb45a-02fb-4193-89b7-b54bc760af60" containerName="mariadb-account-delete" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.448133 4704 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.451850 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.452018 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.456607 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.463068 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.471515 4704 scope.go:117] "RemoveContainer" containerID="da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.481984 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.504037 4704 scope.go:117] "RemoveContainer" containerID="d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.504665 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c\": container with ID starting with d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c not found: ID does not exist" containerID="d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.504700 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c"} err="failed to get container status \"d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c\": rpc error: code = NotFound desc = could not find container \"d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c\": container with ID starting with d068a44d3e28ce60e712e0749983bce82493cda33952d327ee256880ab175d8c not found: ID does not exist" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.504723 4704 scope.go:117] "RemoveContainer" containerID="cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.505176 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb\": container with ID starting with cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb not found: ID does not exist" containerID="cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.505223 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb"} err="failed to get container status \"cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb\": rpc error: code = NotFound desc = could not find container \"cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb\": container with ID starting with cfbcbafeb617a42edfbae220bc2dd2833e38c462d171af59f10df7fa490b7afb not found: ID does not exist" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.505250 4704 scope.go:117] "RemoveContainer" containerID="20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.506187 4704 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8\": container with ID starting with 20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8 not found: ID does not exist" containerID="20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.506213 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8"} err="failed to get container status \"20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8\": rpc error: code = NotFound desc = could not find container \"20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8\": container with ID starting with 20aca8cefd5fa4b28d6879d74c78a56bfae2df953e68b71108f11efc4055eea8 not found: ID does not exist" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.506229 4704 scope.go:117] "RemoveContainer" containerID="da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50" Jan 22 17:02:51 crc kubenswrapper[4704]: E0122 17:02:51.506751 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50\": container with ID starting with da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50 not found: ID does not exist" containerID="da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.506802 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50"} err="failed to get container status \"da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50\": rpc error: code = NotFound desc = could not find container 
\"da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50\": container with ID starting with da73eb651db1a2c837818b7010e5ce75ccfdf86e0e24d48cbb2aa73b47d16f50 not found: ID does not exist" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.506828 4704 scope.go:117] "RemoveContainer" containerID="49018b88851f8556a5d3116ef4c09aeb76bc8da79457c4b4a0be79d34d1ba8ea" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.574968 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-config-data\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.575025 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-run-httpd\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.575063 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-scripts\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.575086 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkzw9\" (UniqueName: \"kubernetes.io/projected/806c3a9d-0a6b-4742-acb5-df18392221e9-kube-api-access-pkzw9\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.575126 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.575154 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.575193 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.575257 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-log-httpd\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.641490 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07cbb45a-02fb-4193-89b7-b54bc760af60" path="/var/lib/kubelet/pods/07cbb45a-02fb-4193-89b7-b54bc760af60/volumes" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.642170 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2be33c17-af62-4139-a650-e2257ae6ef3e" path="/var/lib/kubelet/pods/2be33c17-af62-4139-a650-e2257ae6ef3e/volumes" Jan 22 17:02:51 crc 
kubenswrapper[4704]: I0122 17:02:51.642940 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac38922-ed14-435c-8b0a-21fb1d6eb922" path="/var/lib/kubelet/pods/cac38922-ed14-435c-8b0a-21fb1d6eb922/volumes" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.643993 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d294965f-c653-4a77-8179-db182bf86a01" path="/var/lib/kubelet/pods/d294965f-c653-4a77-8179-db182bf86a01/volumes" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.644456 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc49d8ed-4894-434a-9d50-46836567ff38" path="/var/lib/kubelet/pods/dc49d8ed-4894-434a-9d50-46836567ff38/volumes" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.644948 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f001d4a9-ce4d-49fb-841e-0b51831c4ae2" path="/var/lib/kubelet/pods/f001d4a9-ce4d-49fb-841e-0b51831c4ae2/volumes" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676403 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-config-data\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676452 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-run-httpd\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676483 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-scripts\") pod \"ceilometer-0\" (UID: 
\"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676502 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkzw9\" (UniqueName: \"kubernetes.io/projected/806c3a9d-0a6b-4742-acb5-df18392221e9-kube-api-access-pkzw9\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676538 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676559 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676583 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.676608 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-log-httpd\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 
17:02:51.677108 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-log-httpd\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.677365 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-run-httpd\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.681460 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-config-data\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.681612 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-scripts\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.682292 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.683465 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.692207 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.693396 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkzw9\" (UniqueName: \"kubernetes.io/projected/806c3a9d-0a6b-4742-acb5-df18392221e9-kube-api-access-pkzw9\") pod \"ceilometer-0\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:51 crc kubenswrapper[4704]: I0122 17:02:51.776043 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:52 crc kubenswrapper[4704]: I0122 17:02:52.244803 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:02:52 crc kubenswrapper[4704]: I0122 17:02:52.367647 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerStarted","Data":"56dd816eda77f0ba74e433acddbb76f7ed13829e11054405a38950f72b5de539"} Jan 22 17:02:53 crc kubenswrapper[4704]: I0122 17:02:53.381812 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerStarted","Data":"1d0e0b5328d429949a308967ecff9cba676de594c8f8d01f2328024d50cf081a"} Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.391722 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerStarted","Data":"6b383eb4359703034860b20fbc4703eb6f19ce0405a09f5e6278841f3c06020a"} Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.844606 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-wjskj"] Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.846205 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.866455 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wjskj"] Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.906747 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw"] Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.915003 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.919206 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.938898 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7305dd99-bb45-4b18-b6df-923aded1f77a-operator-scripts\") pod \"watcher-db-create-wjskj\" (UID: \"7305dd99-bb45-4b18-b6df-923aded1f77a\") " pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.939028 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl876\" (UniqueName: \"kubernetes.io/projected/7305dd99-bb45-4b18-b6df-923aded1f77a-kube-api-access-hl876\") pod \"watcher-db-create-wjskj\" (UID: 
\"7305dd99-bb45-4b18-b6df-923aded1f77a\") " pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:54 crc kubenswrapper[4704]: I0122 17:02:54.941178 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw"] Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.042972 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7305dd99-bb45-4b18-b6df-923aded1f77a-operator-scripts\") pod \"watcher-db-create-wjskj\" (UID: \"7305dd99-bb45-4b18-b6df-923aded1f77a\") " pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.043062 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7xbs\" (UniqueName: \"kubernetes.io/projected/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-kube-api-access-s7xbs\") pod \"watcher-d3cb-account-create-update-bt8dw\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.043126 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl876\" (UniqueName: \"kubernetes.io/projected/7305dd99-bb45-4b18-b6df-923aded1f77a-kube-api-access-hl876\") pod \"watcher-db-create-wjskj\" (UID: \"7305dd99-bb45-4b18-b6df-923aded1f77a\") " pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.043155 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-operator-scripts\") pod \"watcher-d3cb-account-create-update-bt8dw\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 
17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.044205 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7305dd99-bb45-4b18-b6df-923aded1f77a-operator-scripts\") pod \"watcher-db-create-wjskj\" (UID: \"7305dd99-bb45-4b18-b6df-923aded1f77a\") " pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.062384 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl876\" (UniqueName: \"kubernetes.io/projected/7305dd99-bb45-4b18-b6df-923aded1f77a-kube-api-access-hl876\") pod \"watcher-db-create-wjskj\" (UID: \"7305dd99-bb45-4b18-b6df-923aded1f77a\") " pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.145089 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7xbs\" (UniqueName: \"kubernetes.io/projected/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-kube-api-access-s7xbs\") pod \"watcher-d3cb-account-create-update-bt8dw\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.145235 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-operator-scripts\") pod \"watcher-d3cb-account-create-update-bt8dw\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.146996 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-operator-scripts\") pod \"watcher-d3cb-account-create-update-bt8dw\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " 
pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.165268 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.173054 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7xbs\" (UniqueName: \"kubernetes.io/projected/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-kube-api-access-s7xbs\") pod \"watcher-d3cb-account-create-update-bt8dw\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.238631 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.465472 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerStarted","Data":"3e8386af4ee1a1f191aef148b7218903efa75f5b7b2fcbf7c271516253c3486a"} Jan 22 17:02:55 crc kubenswrapper[4704]: I0122 17:02:55.622099 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wjskj"] Jan 22 17:02:56 crc kubenswrapper[4704]: W0122 17:02:56.042689 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode513855f_00f7_4f6d_95f9_eb83a01a2e3c.slice/crio-6b3feaa4ea58ed92911910c7401246364760a9cb24751047d489cdffbec1aa94 WatchSource:0}: Error finding container 6b3feaa4ea58ed92911910c7401246364760a9cb24751047d489cdffbec1aa94: Status 404 returned error can't find the container with id 6b3feaa4ea58ed92911910c7401246364760a9cb24751047d489cdffbec1aa94 Jan 22 17:02:56 crc kubenswrapper[4704]: I0122 17:02:56.058616 4704 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw"] Jan 22 17:02:56 crc kubenswrapper[4704]: I0122 17:02:56.476583 4704 generic.go:334] "Generic (PLEG): container finished" podID="7305dd99-bb45-4b18-b6df-923aded1f77a" containerID="c30161fc5567f2be54f3d1c7f9fff57e8583dc81d1abd60cabffdf5378436b58" exitCode=0 Jan 22 17:02:56 crc kubenswrapper[4704]: I0122 17:02:56.476663 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wjskj" event={"ID":"7305dd99-bb45-4b18-b6df-923aded1f77a","Type":"ContainerDied","Data":"c30161fc5567f2be54f3d1c7f9fff57e8583dc81d1abd60cabffdf5378436b58"} Jan 22 17:02:56 crc kubenswrapper[4704]: I0122 17:02:56.476919 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wjskj" event={"ID":"7305dd99-bb45-4b18-b6df-923aded1f77a","Type":"ContainerStarted","Data":"1952e442077ea2b0758a540e688b1ae00077064dede12bb8b2b96cd925addf99"} Jan 22 17:02:56 crc kubenswrapper[4704]: I0122 17:02:56.479683 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" event={"ID":"e513855f-00f7-4f6d-95f9-eb83a01a2e3c","Type":"ContainerStarted","Data":"c5a1b127eca5de1fad7d76d248ec7fc4a1bc076edf3b010c4389fed50f0c7703"} Jan 22 17:02:56 crc kubenswrapper[4704]: I0122 17:02:56.479707 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" event={"ID":"e513855f-00f7-4f6d-95f9-eb83a01a2e3c","Type":"ContainerStarted","Data":"6b3feaa4ea58ed92911910c7401246364760a9cb24751047d489cdffbec1aa94"} Jan 22 17:02:56 crc kubenswrapper[4704]: I0122 17:02:56.527464 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" podStartSLOduration=2.527435108 podStartE2EDuration="2.527435108s" podCreationTimestamp="2026-01-22 
17:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:02:56.518534756 +0000 UTC m=+2069.163081456" watchObservedRunningTime="2026-01-22 17:02:56.527435108 +0000 UTC m=+2069.171981808" Jan 22 17:02:57 crc kubenswrapper[4704]: I0122 17:02:57.490569 4704 generic.go:334] "Generic (PLEG): container finished" podID="e513855f-00f7-4f6d-95f9-eb83a01a2e3c" containerID="c5a1b127eca5de1fad7d76d248ec7fc4a1bc076edf3b010c4389fed50f0c7703" exitCode=0 Jan 22 17:02:57 crc kubenswrapper[4704]: I0122 17:02:57.490664 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" event={"ID":"e513855f-00f7-4f6d-95f9-eb83a01a2e3c","Type":"ContainerDied","Data":"c5a1b127eca5de1fad7d76d248ec7fc4a1bc076edf3b010c4389fed50f0c7703"} Jan 22 17:02:57 crc kubenswrapper[4704]: I0122 17:02:57.495261 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerStarted","Data":"83f2f1ee85d835b91648abd275edbc076524c973ee8b5a2a421a89c3a7968cc5"} Jan 22 17:02:57 crc kubenswrapper[4704]: I0122 17:02:57.495417 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:02:57 crc kubenswrapper[4704]: I0122 17:02:57.552917 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.954285709 podStartE2EDuration="6.552896116s" podCreationTimestamp="2026-01-22 17:02:51 +0000 UTC" firstStartedPulling="2026-01-22 17:02:52.252725479 +0000 UTC m=+2064.897272179" lastFinishedPulling="2026-01-22 17:02:56.851335886 +0000 UTC m=+2069.495882586" observedRunningTime="2026-01-22 17:02:57.547225555 +0000 UTC m=+2070.191772275" watchObservedRunningTime="2026-01-22 17:02:57.552896116 +0000 UTC m=+2070.197442816" Jan 22 
17:02:57 crc kubenswrapper[4704]: I0122 17:02:57.862976 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.032393 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7305dd99-bb45-4b18-b6df-923aded1f77a-operator-scripts\") pod \"7305dd99-bb45-4b18-b6df-923aded1f77a\" (UID: \"7305dd99-bb45-4b18-b6df-923aded1f77a\") " Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.032520 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl876\" (UniqueName: \"kubernetes.io/projected/7305dd99-bb45-4b18-b6df-923aded1f77a-kube-api-access-hl876\") pod \"7305dd99-bb45-4b18-b6df-923aded1f77a\" (UID: \"7305dd99-bb45-4b18-b6df-923aded1f77a\") " Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.033300 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7305dd99-bb45-4b18-b6df-923aded1f77a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7305dd99-bb45-4b18-b6df-923aded1f77a" (UID: "7305dd99-bb45-4b18-b6df-923aded1f77a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.033689 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7305dd99-bb45-4b18-b6df-923aded1f77a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.038200 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7305dd99-bb45-4b18-b6df-923aded1f77a-kube-api-access-hl876" (OuterVolumeSpecName: "kube-api-access-hl876") pod "7305dd99-bb45-4b18-b6df-923aded1f77a" (UID: "7305dd99-bb45-4b18-b6df-923aded1f77a"). 
InnerVolumeSpecName "kube-api-access-hl876". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.137982 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl876\" (UniqueName: \"kubernetes.io/projected/7305dd99-bb45-4b18-b6df-923aded1f77a-kube-api-access-hl876\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.520731 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wjskj" Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.525993 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wjskj" event={"ID":"7305dd99-bb45-4b18-b6df-923aded1f77a","Type":"ContainerDied","Data":"1952e442077ea2b0758a540e688b1ae00077064dede12bb8b2b96cd925addf99"} Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.526054 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1952e442077ea2b0758a540e688b1ae00077064dede12bb8b2b96cd925addf99" Jan 22 17:02:58 crc kubenswrapper[4704]: I0122 17:02:58.918230 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.053656 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7xbs\" (UniqueName: \"kubernetes.io/projected/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-kube-api-access-s7xbs\") pod \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.053747 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-operator-scripts\") pod \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\" (UID: \"e513855f-00f7-4f6d-95f9-eb83a01a2e3c\") " Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.057273 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e513855f-00f7-4f6d-95f9-eb83a01a2e3c" (UID: "e513855f-00f7-4f6d-95f9-eb83a01a2e3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.069253 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-kube-api-access-s7xbs" (OuterVolumeSpecName: "kube-api-access-s7xbs") pod "e513855f-00f7-4f6d-95f9-eb83a01a2e3c" (UID: "e513855f-00f7-4f6d-95f9-eb83a01a2e3c"). InnerVolumeSpecName "kube-api-access-s7xbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.158318 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7xbs\" (UniqueName: \"kubernetes.io/projected/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-kube-api-access-s7xbs\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.158368 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e513855f-00f7-4f6d-95f9-eb83a01a2e3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.528316 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" event={"ID":"e513855f-00f7-4f6d-95f9-eb83a01a2e3c","Type":"ContainerDied","Data":"6b3feaa4ea58ed92911910c7401246364760a9cb24751047d489cdffbec1aa94"} Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.528874 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3feaa4ea58ed92911910c7401246364760a9cb24751047d489cdffbec1aa94" Jan 22 17:02:59 crc kubenswrapper[4704]: I0122 17:02:59.528425 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.203856 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-spr7b"] Jan 22 17:03:00 crc kubenswrapper[4704]: E0122 17:03:00.204222 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7305dd99-bb45-4b18-b6df-923aded1f77a" containerName="mariadb-database-create" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.204241 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="7305dd99-bb45-4b18-b6df-923aded1f77a" containerName="mariadb-database-create" Jan 22 17:03:00 crc kubenswrapper[4704]: E0122 17:03:00.204262 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e513855f-00f7-4f6d-95f9-eb83a01a2e3c" containerName="mariadb-account-create-update" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.204272 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e513855f-00f7-4f6d-95f9-eb83a01a2e3c" containerName="mariadb-account-create-update" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.204472 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="7305dd99-bb45-4b18-b6df-923aded1f77a" containerName="mariadb-database-create" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.204497 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e513855f-00f7-4f6d-95f9-eb83a01a2e3c" containerName="mariadb-account-create-update" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.205189 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.207513 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.207628 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-g8229" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.218917 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-spr7b"] Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.278142 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-config-data\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.278268 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlscd\" (UniqueName: \"kubernetes.io/projected/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-kube-api-access-rlscd\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.278318 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-db-sync-config-data\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.278354 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.379870 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-config-data\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.379969 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlscd\" (UniqueName: \"kubernetes.io/projected/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-kube-api-access-rlscd\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.380027 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-db-sync-config-data\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.380058 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 
17:03:00.388570 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.389043 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-db-sync-config-data\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.393587 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-config-data\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.396190 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlscd\" (UniqueName: \"kubernetes.io/projected/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-kube-api-access-rlscd\") pod \"watcher-kuttl-db-sync-spr7b\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:00 crc kubenswrapper[4704]: I0122 17:03:00.520823 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:01 crc kubenswrapper[4704]: I0122 17:03:01.011056 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-spr7b"] Jan 22 17:03:01 crc kubenswrapper[4704]: W0122 17:03:01.016767 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf1578a7_3cf1_4ef6_a5d4_281f7eaf8154.slice/crio-a2cb919f4ea78bf75d8780f2895eb8b607dc1899c8a95653ca16275ea350df22 WatchSource:0}: Error finding container a2cb919f4ea78bf75d8780f2895eb8b607dc1899c8a95653ca16275ea350df22: Status 404 returned error can't find the container with id a2cb919f4ea78bf75d8780f2895eb8b607dc1899c8a95653ca16275ea350df22 Jan 22 17:03:01 crc kubenswrapper[4704]: I0122 17:03:01.541755 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" event={"ID":"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154","Type":"ContainerStarted","Data":"8709fb3024b9a4277d000a5910dd5c0a92d322187031c389dd7971660a9c4f66"} Jan 22 17:03:01 crc kubenswrapper[4704]: I0122 17:03:01.542118 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" event={"ID":"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154","Type":"ContainerStarted","Data":"a2cb919f4ea78bf75d8780f2895eb8b607dc1899c8a95653ca16275ea350df22"} Jan 22 17:03:01 crc kubenswrapper[4704]: I0122 17:03:01.558583 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" podStartSLOduration=1.5585642069999999 podStartE2EDuration="1.558564207s" podCreationTimestamp="2026-01-22 17:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:01.554593294 +0000 UTC m=+2074.199140024" watchObservedRunningTime="2026-01-22 
17:03:01.558564207 +0000 UTC m=+2074.203110897" Jan 22 17:03:04 crc kubenswrapper[4704]: I0122 17:03:04.569396 4704 generic.go:334] "Generic (PLEG): container finished" podID="bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" containerID="8709fb3024b9a4277d000a5910dd5c0a92d322187031c389dd7971660a9c4f66" exitCode=0 Jan 22 17:03:04 crc kubenswrapper[4704]: I0122 17:03:04.569557 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" event={"ID":"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154","Type":"ContainerDied","Data":"8709fb3024b9a4277d000a5910dd5c0a92d322187031c389dd7971660a9c4f66"} Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.158535 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.288287 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlscd\" (UniqueName: \"kubernetes.io/projected/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-kube-api-access-rlscd\") pod \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.288355 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-db-sync-config-data\") pod \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.288551 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-config-data\") pod \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.288588 4704 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-combined-ca-bundle\") pod \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\" (UID: \"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154\") " Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.294953 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-kube-api-access-rlscd" (OuterVolumeSpecName: "kube-api-access-rlscd") pod "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" (UID: "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154"). InnerVolumeSpecName "kube-api-access-rlscd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.295188 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" (UID: "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.314113 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" (UID: "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.341010 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-config-data" (OuterVolumeSpecName: "config-data") pod "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" (UID: "bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.391270 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlscd\" (UniqueName: \"kubernetes.io/projected/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-kube-api-access-rlscd\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.391312 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.391327 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.391338 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.591781 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" event={"ID":"bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154","Type":"ContainerDied","Data":"a2cb919f4ea78bf75d8780f2895eb8b607dc1899c8a95653ca16275ea350df22"} Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.591837 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-spr7b" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.591840 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2cb919f4ea78bf75d8780f2895eb8b607dc1899c8a95653ca16275ea350df22" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.911456 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:03:06 crc kubenswrapper[4704]: E0122 17:03:06.912044 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" containerName="watcher-kuttl-db-sync" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.912059 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" containerName="watcher-kuttl-db-sync" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.912234 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" containerName="watcher-kuttl-db-sync" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.913128 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.915616 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-g8229" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.915855 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.925143 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.964856 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.965865 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.969574 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.976513 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.977735 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.985822 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 17:03:06 crc kubenswrapper[4704]: I0122 17:03:06.986211 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.003212 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.076984 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hrhmn"] Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.084745 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-hrhmn"] Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100458 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjh5\" (UniqueName: \"kubernetes.io/projected/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-kube-api-access-8hjh5\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100510 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100542 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100573 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043253c9-c0fa-4002-8d32-460323cf7865-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100599 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100653 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100674 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100700 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r6l9x\" (UniqueName: \"kubernetes.io/projected/043253c9-c0fa-4002-8d32-460323cf7865-kube-api-access-r6l9x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100728 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100756 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100798 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100837 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 
17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100860 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100887 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100909 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c762f-e436-4440-8400-e3d0de1e4035-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100953 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k79k7\" (UniqueName: \"kubernetes.io/projected/670c762f-e436-4440-8400-e3d0de1e4035-kube-api-access-k79k7\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.100981 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202121 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202169 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202197 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6l9x\" (UniqueName: \"kubernetes.io/projected/043253c9-c0fa-4002-8d32-460323cf7865-kube-api-access-r6l9x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202227 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202255 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202287 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202328 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202360 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202388 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202416 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c762f-e436-4440-8400-e3d0de1e4035-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202461 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k79k7\" (UniqueName: \"kubernetes.io/projected/670c762f-e436-4440-8400-e3d0de1e4035-kube-api-access-k79k7\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202493 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202543 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hjh5\" (UniqueName: \"kubernetes.io/projected/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-kube-api-access-8hjh5\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202568 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202596 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202620 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043253c9-c0fa-4002-8d32-460323cf7865-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.202650 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.203175 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.203747 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c762f-e436-4440-8400-e3d0de1e4035-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.204173 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043253c9-c0fa-4002-8d32-460323cf7865-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.207557 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.208919 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.209256 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.209685 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.211012 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.211366 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.211366 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.220857 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.221266 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.222429 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.222822 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.223558 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hjh5\" (UniqueName: \"kubernetes.io/projected/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-kube-api-access-8hjh5\") pod \"watcher-kuttl-api-0\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.225329 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6l9x\" (UniqueName: \"kubernetes.io/projected/043253c9-c0fa-4002-8d32-460323cf7865-kube-api-access-r6l9x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.228829 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k79k7\" (UniqueName: \"kubernetes.io/projected/670c762f-e436-4440-8400-e3d0de1e4035-kube-api-access-k79k7\") pod \"watcher-kuttl-applier-0\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.284098 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.301347 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.309688 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.652940 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8" path="/var/lib/kubelet/pods/cf76aeba-f8c8-4df1-bdcf-c1995b66cdd8/volumes" Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.797941 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.816046 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:07 crc kubenswrapper[4704]: W0122 17:03:07.827689 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043253c9_c0fa_4002_8d32_460323cf7865.slice/crio-77d158df5d4da5befa5dc6c0cd477ccdfdc7231de1773d9d6f0bed54144e8c10 WatchSource:0}: Error finding container 77d158df5d4da5befa5dc6c0cd477ccdfdc7231de1773d9d6f0bed54144e8c10: Status 404 returned error can't find the container with id 77d158df5d4da5befa5dc6c0cd477ccdfdc7231de1773d9d6f0bed54144e8c10 Jan 22 17:03:07 crc kubenswrapper[4704]: I0122 17:03:07.922298 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.606715 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"043253c9-c0fa-4002-8d32-460323cf7865","Type":"ContainerStarted","Data":"3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd"} Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.607099 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"043253c9-c0fa-4002-8d32-460323cf7865","Type":"ContainerStarted","Data":"77d158df5d4da5befa5dc6c0cd477ccdfdc7231de1773d9d6f0bed54144e8c10"} Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.608918 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"670c762f-e436-4440-8400-e3d0de1e4035","Type":"ContainerStarted","Data":"5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db"} Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.608961 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"670c762f-e436-4440-8400-e3d0de1e4035","Type":"ContainerStarted","Data":"dbc64a6d07b361fbe4e7b30ff2406e04e47caf2589a9eaa8e0462099b25c221c"} Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.611236 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f4aedecd-1eec-4fa2-92e0-0a999ec82af6","Type":"ContainerStarted","Data":"ba214b871105064097a952c3226bafe902f3e48627231c9f49034ae16f422bfd"} Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.611663 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f4aedecd-1eec-4fa2-92e0-0a999ec82af6","Type":"ContainerStarted","Data":"d2e29c1cdfe47bc3cf34020e3e385ace077790466d0201a7b74222bb202313c3"} Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.611687 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f4aedecd-1eec-4fa2-92e0-0a999ec82af6","Type":"ContainerStarted","Data":"b2d7e0623ae59247f1b13c851784f14c0bffe8d25c4f15b9e00c3cd432f6b187"} Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.611707 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.643436 4704 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.643408601 podStartE2EDuration="2.643408601s" podCreationTimestamp="2026-01-22 17:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:08.627017686 +0000 UTC m=+2081.271564406" watchObservedRunningTime="2026-01-22 17:03:08.643408601 +0000 UTC m=+2081.287955301" Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.660685 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.66066647 podStartE2EDuration="2.66066647s" podCreationTimestamp="2026-01-22 17:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:08.649616917 +0000 UTC m=+2081.294163617" watchObservedRunningTime="2026-01-22 17:03:08.66066647 +0000 UTC m=+2081.305213170" Jan 22 17:03:08 crc kubenswrapper[4704]: I0122 17:03:08.677147 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.677126417 podStartE2EDuration="2.677126417s" podCreationTimestamp="2026-01-22 17:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:08.676396346 +0000 UTC m=+2081.320943046" watchObservedRunningTime="2026-01-22 17:03:08.677126417 +0000 UTC m=+2081.321673147" Jan 22 17:03:11 crc kubenswrapper[4704]: I0122 17:03:11.029744 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:12 crc kubenswrapper[4704]: I0122 17:03:12.284495 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:12 crc kubenswrapper[4704]: I0122 17:03:12.301824 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.285037 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.288616 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.302237 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.311410 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.333022 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.344966 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.697044 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.720221 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:17 crc kubenswrapper[4704]: I0122 17:03:17.725053 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:17 crc kubenswrapper[4704]: 
I0122 17:03:17.734674 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:19 crc kubenswrapper[4704]: I0122 17:03:19.986990 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:19 crc kubenswrapper[4704]: I0122 17:03:19.987648 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-central-agent" containerID="cri-o://1d0e0b5328d429949a308967ecff9cba676de594c8f8d01f2328024d50cf081a" gracePeriod=30 Jan 22 17:03:19 crc kubenswrapper[4704]: I0122 17:03:19.987739 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="sg-core" containerID="cri-o://3e8386af4ee1a1f191aef148b7218903efa75f5b7b2fcbf7c271516253c3486a" gracePeriod=30 Jan 22 17:03:19 crc kubenswrapper[4704]: I0122 17:03:19.987812 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-notification-agent" containerID="cri-o://6b383eb4359703034860b20fbc4703eb6f19ce0405a09f5e6278841f3c06020a" gracePeriod=30 Jan 22 17:03:19 crc kubenswrapper[4704]: I0122 17:03:19.987906 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="proxy-httpd" containerID="cri-o://83f2f1ee85d835b91648abd275edbc076524c973ee8b5a2a421a89c3a7968cc5" gracePeriod=30 Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.092261 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="proxy-httpd" 
probeResult="failure" output="Get \"https://10.217.0.214:3000/\": read tcp 10.217.0.2:44048->10.217.0.214:3000: read: connection reset by peer" Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728497 4704 generic.go:334] "Generic (PLEG): container finished" podID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerID="83f2f1ee85d835b91648abd275edbc076524c973ee8b5a2a421a89c3a7968cc5" exitCode=0 Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728530 4704 generic.go:334] "Generic (PLEG): container finished" podID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerID="3e8386af4ee1a1f191aef148b7218903efa75f5b7b2fcbf7c271516253c3486a" exitCode=2 Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728542 4704 generic.go:334] "Generic (PLEG): container finished" podID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerID="6b383eb4359703034860b20fbc4703eb6f19ce0405a09f5e6278841f3c06020a" exitCode=0 Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728551 4704 generic.go:334] "Generic (PLEG): container finished" podID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerID="1d0e0b5328d429949a308967ecff9cba676de594c8f8d01f2328024d50cf081a" exitCode=0 Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728574 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerDied","Data":"83f2f1ee85d835b91648abd275edbc076524c973ee8b5a2a421a89c3a7968cc5"} Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728602 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerDied","Data":"3e8386af4ee1a1f191aef148b7218903efa75f5b7b2fcbf7c271516253c3486a"} Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728614 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerDied","Data":"6b383eb4359703034860b20fbc4703eb6f19ce0405a09f5e6278841f3c06020a"} Jan 22 17:03:20 crc kubenswrapper[4704]: I0122 17:03:20.728625 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerDied","Data":"1d0e0b5328d429949a308967ecff9cba676de594c8f8d01f2328024d50cf081a"} Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.003359 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150165 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkzw9\" (UniqueName: \"kubernetes.io/projected/806c3a9d-0a6b-4742-acb5-df18392221e9-kube-api-access-pkzw9\") pod \"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150219 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-config-data\") pod \"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150263 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-sg-core-conf-yaml\") pod \"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150282 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-log-httpd\") pod 
\"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150302 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-combined-ca-bundle\") pod \"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150389 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-run-httpd\") pod \"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150406 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-scripts\") pod \"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.150475 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-ceilometer-tls-certs\") pod \"806c3a9d-0a6b-4742-acb5-df18392221e9\" (UID: \"806c3a9d-0a6b-4742-acb5-df18392221e9\") " Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.151046 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.151769 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.157391 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-scripts" (OuterVolumeSpecName: "scripts") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.157556 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806c3a9d-0a6b-4742-acb5-df18392221e9-kube-api-access-pkzw9" (OuterVolumeSpecName: "kube-api-access-pkzw9") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "kube-api-access-pkzw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.175115 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.197475 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.219537 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.244683 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-config-data" (OuterVolumeSpecName: "config-data") pod "806c3a9d-0a6b-4742-acb5-df18392221e9" (UID: "806c3a9d-0a6b-4742-acb5-df18392221e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252705 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252753 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252768 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252781 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/806c3a9d-0a6b-4742-acb5-df18392221e9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252810 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252824 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252839 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkzw9\" (UniqueName: \"kubernetes.io/projected/806c3a9d-0a6b-4742-acb5-df18392221e9-kube-api-access-pkzw9\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.252852 4704 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/806c3a9d-0a6b-4742-acb5-df18392221e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.739959 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"806c3a9d-0a6b-4742-acb5-df18392221e9","Type":"ContainerDied","Data":"56dd816eda77f0ba74e433acddbb76f7ed13829e11054405a38950f72b5de539"} Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.740422 4704 scope.go:117] "RemoveContainer" containerID="83f2f1ee85d835b91648abd275edbc076524c973ee8b5a2a421a89c3a7968cc5" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.740203 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.770686 4704 scope.go:117] "RemoveContainer" containerID="3e8386af4ee1a1f191aef148b7218903efa75f5b7b2fcbf7c271516253c3486a" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.771609 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.777957 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.794904 4704 scope.go:117] "RemoveContainer" containerID="6b383eb4359703034860b20fbc4703eb6f19ce0405a09f5e6278841f3c06020a" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.798739 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:21 crc kubenswrapper[4704]: E0122 17:03:21.799053 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-notification-agent" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.799069 4704 
state_mem.go:107] "Deleted CPUSet assignment" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-notification-agent" Jan 22 17:03:21 crc kubenswrapper[4704]: E0122 17:03:21.799088 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-central-agent" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.799095 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-central-agent" Jan 22 17:03:21 crc kubenswrapper[4704]: E0122 17:03:21.799107 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="proxy-httpd" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.799113 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="proxy-httpd" Jan 22 17:03:21 crc kubenswrapper[4704]: E0122 17:03:21.799129 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="sg-core" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.799135 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="sg-core" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.799265 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-notification-agent" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.799274 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="ceilometer-central-agent" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.799282 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="sg-core" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 
17:03:21.799296 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" containerName="proxy-httpd" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.800803 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.813552 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.814350 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.814490 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.819002 4704 scope.go:117] "RemoveContainer" containerID="1d0e0b5328d429949a308967ecff9cba676de594c8f8d01f2328024d50cf081a" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.829752 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.963378 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-config-data\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.963449 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 
crc kubenswrapper[4704]: I0122 17:03:21.963467 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.963515 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gck5\" (UniqueName: \"kubernetes.io/projected/6dcd69f7-877e-435c-94bf-56e360ae4c8f-kube-api-access-2gck5\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.963533 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-scripts\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.963549 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.963582 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:21 crc kubenswrapper[4704]: I0122 17:03:21.963645 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065130 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065187 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065222 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-config-data\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065284 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065317 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065364 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gck5\" (UniqueName: \"kubernetes.io/projected/6dcd69f7-877e-435c-94bf-56e360ae4c8f-kube-api-access-2gck5\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065383 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-scripts\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065401 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.065691 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.066883 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.069810 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.069832 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-scripts\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.070161 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.071962 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.075879 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-config-data\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.088756 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gck5\" 
(UniqueName: \"kubernetes.io/projected/6dcd69f7-877e-435c-94bf-56e360ae4c8f-kube-api-access-2gck5\") pod \"ceilometer-0\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.137200 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.603836 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:22 crc kubenswrapper[4704]: I0122 17:03:22.748135 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerStarted","Data":"67cdc8ffd47e597f5159a90918207b1215989e4bd399c8580a6f1ee152e34065"} Jan 22 17:03:23 crc kubenswrapper[4704]: I0122 17:03:23.643526 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806c3a9d-0a6b-4742-acb5-df18392221e9" path="/var/lib/kubelet/pods/806c3a9d-0a6b-4742-acb5-df18392221e9/volumes" Jan 22 17:03:23 crc kubenswrapper[4704]: I0122 17:03:23.848122 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerStarted","Data":"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2"} Jan 22 17:03:24 crc kubenswrapper[4704]: I0122 17:03:24.863078 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerStarted","Data":"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16"} Jan 22 17:03:24 crc kubenswrapper[4704]: I0122 17:03:24.863367 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerStarted","Data":"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf"} Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.546027 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-spr7b"] Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.553363 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-spr7b"] Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.598430 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.598867 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="043253c9-c0fa-4002-8d32-460323cf7865" containerName="watcher-decision-engine" containerID="cri-o://3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd" gracePeriod=30 Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.643335 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154" path="/var/lib/kubelet/pods/bf1578a7-3cf1-4ef6-a5d4-281f7eaf8154/volumes" Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.643856 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherd3cb-account-delete-bxwsq"] Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.645034 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.651528 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.651833 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="670c762f-e436-4440-8400-e3d0de1e4035" containerName="watcher-applier" containerID="cri-o://5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" gracePeriod=30 Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.665458 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherd3cb-account-delete-bxwsq"] Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.725091 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.725321 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-kuttl-api-log" containerID="cri-o://d2e29c1cdfe47bc3cf34020e3e385ace077790466d0201a7b74222bb202313c3" gracePeriod=30 Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.725781 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-api" containerID="cri-o://ba214b871105064097a952c3226bafe902f3e48627231c9f49034ae16f422bfd" gracePeriod=30 Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.795482 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7174d73-f6b3-4ca5-84c8-8bc499628927-operator-scripts\") pod 
\"watcherd3cb-account-delete-bxwsq\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") " pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.795563 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdx9\" (UniqueName: \"kubernetes.io/projected/e7174d73-f6b3-4ca5-84c8-8bc499628927-kube-api-access-dpdx9\") pod \"watcherd3cb-account-delete-bxwsq\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") " pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.885501 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerID="d2e29c1cdfe47bc3cf34020e3e385ace077790466d0201a7b74222bb202313c3" exitCode=143 Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.885592 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f4aedecd-1eec-4fa2-92e0-0a999ec82af6","Type":"ContainerDied","Data":"d2e29c1cdfe47bc3cf34020e3e385ace077790466d0201a7b74222bb202313c3"} Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.897407 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7174d73-f6b3-4ca5-84c8-8bc499628927-operator-scripts\") pod \"watcherd3cb-account-delete-bxwsq\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") " pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.897473 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpdx9\" (UniqueName: \"kubernetes.io/projected/e7174d73-f6b3-4ca5-84c8-8bc499628927-kube-api-access-dpdx9\") pod \"watcherd3cb-account-delete-bxwsq\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") " pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" 
Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.898559 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7174d73-f6b3-4ca5-84c8-8bc499628927-operator-scripts\") pod \"watcherd3cb-account-delete-bxwsq\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") " pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.919028 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpdx9\" (UniqueName: \"kubernetes.io/projected/e7174d73-f6b3-4ca5-84c8-8bc499628927-kube-api-access-dpdx9\") pod \"watcherd3cb-account-delete-bxwsq\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") " pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" Jan 22 17:03:25 crc kubenswrapper[4704]: I0122 17:03:25.968722 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.484909 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherd3cb-account-delete-bxwsq"] Jan 22 17:03:26 crc kubenswrapper[4704]: W0122 17:03:26.487905 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7174d73_f6b3_4ca5_84c8_8bc499628927.slice/crio-e618c26626b8faefb058ee0507c1238f4e78196695411755e611882c0342797f WatchSource:0}: Error finding container e618c26626b8faefb058ee0507c1238f4e78196695411755e611882c0342797f: Status 404 returned error can't find the container with id e618c26626b8faefb058ee0507c1238f4e78196695411755e611882c0342797f Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.894998 4704 generic.go:334] "Generic (PLEG): container finished" podID="e7174d73-f6b3-4ca5-84c8-8bc499628927" containerID="54a4de63cf0d02595e13ff122de45eddb78e1fe379066e42dc4936578b6e057c" exitCode=0 
Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.895050 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" event={"ID":"e7174d73-f6b3-4ca5-84c8-8bc499628927","Type":"ContainerDied","Data":"54a4de63cf0d02595e13ff122de45eddb78e1fe379066e42dc4936578b6e057c"}
Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.895125 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" event={"ID":"e7174d73-f6b3-4ca5-84c8-8bc499628927","Type":"ContainerStarted","Data":"e618c26626b8faefb058ee0507c1238f4e78196695411755e611882c0342797f"}
Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.898442 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerStarted","Data":"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7"}
Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.898734 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.902906 4704 generic.go:334] "Generic (PLEG): container finished" podID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerID="ba214b871105064097a952c3226bafe902f3e48627231c9f49034ae16f422bfd" exitCode=0
Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.902942 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f4aedecd-1eec-4fa2-92e0-0a999ec82af6","Type":"ContainerDied","Data":"ba214b871105064097a952c3226bafe902f3e48627231c9f49034ae16f422bfd"}
Jan 22 17:03:26 crc kubenswrapper[4704]: I0122 17:03:26.933823 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.84263026 podStartE2EDuration="5.933805428s" podCreationTimestamp="2026-01-22 17:03:21 +0000 UTC" firstStartedPulling="2026-01-22 17:03:22.604406259 +0000 UTC m=+2095.248952959" lastFinishedPulling="2026-01-22 17:03:25.695581427 +0000 UTC m=+2098.340128127" observedRunningTime="2026-01-22 17:03:26.924763592 +0000 UTC m=+2099.569310292" watchObservedRunningTime="2026-01-22 17:03:26.933805428 +0000 UTC m=+2099.578352128"
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.136375 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.224063 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-config-data\") pod \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") "
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.224223 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-custom-prometheus-ca\") pod \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") "
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.225849 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hjh5\" (UniqueName: \"kubernetes.io/projected/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-kube-api-access-8hjh5\") pod \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") "
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.225936 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-combined-ca-bundle\") pod \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") "
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.226003 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-logs\") pod \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") "
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.226043 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-cert-memcached-mtls\") pod \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\" (UID: \"f4aedecd-1eec-4fa2-92e0-0a999ec82af6\") "
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.226667 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-logs" (OuterVolumeSpecName: "logs") pod "f4aedecd-1eec-4fa2-92e0-0a999ec82af6" (UID: "f4aedecd-1eec-4fa2-92e0-0a999ec82af6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.227272 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-logs\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.244205 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-kube-api-access-8hjh5" (OuterVolumeSpecName: "kube-api-access-8hjh5") pod "f4aedecd-1eec-4fa2-92e0-0a999ec82af6" (UID: "f4aedecd-1eec-4fa2-92e0-0a999ec82af6"). InnerVolumeSpecName "kube-api-access-8hjh5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.251875 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4aedecd-1eec-4fa2-92e0-0a999ec82af6" (UID: "f4aedecd-1eec-4fa2-92e0-0a999ec82af6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.258734 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f4aedecd-1eec-4fa2-92e0-0a999ec82af6" (UID: "f4aedecd-1eec-4fa2-92e0-0a999ec82af6"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.274339 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-config-data" (OuterVolumeSpecName: "config-data") pod "f4aedecd-1eec-4fa2-92e0-0a999ec82af6" (UID: "f4aedecd-1eec-4fa2-92e0-0a999ec82af6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:27 crc kubenswrapper[4704]: E0122 17:03:27.304466 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 22 17:03:27 crc kubenswrapper[4704]: E0122 17:03:27.306020 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 22 17:03:27 crc kubenswrapper[4704]: E0122 17:03:27.307134 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 22 17:03:27 crc kubenswrapper[4704]: E0122 17:03:27.307256 4704 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="670c762f-e436-4440-8400-e3d0de1e4035" containerName="watcher-applier"
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.315963 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f4aedecd-1eec-4fa2-92e0-0a999ec82af6" (UID: "f4aedecd-1eec-4fa2-92e0-0a999ec82af6"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.329106 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.329135 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.329146 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.329154 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.329163 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hjh5\" (UniqueName: \"kubernetes.io/projected/f4aedecd-1eec-4fa2-92e0-0a999ec82af6-kube-api-access-8hjh5\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.912712 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f4aedecd-1eec-4fa2-92e0-0a999ec82af6","Type":"ContainerDied","Data":"b2d7e0623ae59247f1b13c851784f14c0bffe8d25c4f15b9e00c3cd432f6b187"}
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.913071 4704 scope.go:117] "RemoveContainer" containerID="ba214b871105064097a952c3226bafe902f3e48627231c9f49034ae16f422bfd"
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.912862 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.934148 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.938886 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 17:03:27 crc kubenswrapper[4704]: I0122 17:03:27.940937 4704 scope.go:117] "RemoveContainer" containerID="d2e29c1cdfe47bc3cf34020e3e385ace077790466d0201a7b74222bb202313c3"
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.303703 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq"
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.365395 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.444890 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7174d73-f6b3-4ca5-84c8-8bc499628927-operator-scripts\") pod \"e7174d73-f6b3-4ca5-84c8-8bc499628927\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") "
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.444969 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpdx9\" (UniqueName: \"kubernetes.io/projected/e7174d73-f6b3-4ca5-84c8-8bc499628927-kube-api-access-dpdx9\") pod \"e7174d73-f6b3-4ca5-84c8-8bc499628927\" (UID: \"e7174d73-f6b3-4ca5-84c8-8bc499628927\") "
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.445390 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7174d73-f6b3-4ca5-84c8-8bc499628927-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7174d73-f6b3-4ca5-84c8-8bc499628927" (UID: "e7174d73-f6b3-4ca5-84c8-8bc499628927"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.453144 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7174d73-f6b3-4ca5-84c8-8bc499628927-kube-api-access-dpdx9" (OuterVolumeSpecName: "kube-api-access-dpdx9") pod "e7174d73-f6b3-4ca5-84c8-8bc499628927" (UID: "e7174d73-f6b3-4ca5-84c8-8bc499628927"). InnerVolumeSpecName "kube-api-access-dpdx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.547280 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7174d73-f6b3-4ca5-84c8-8bc499628927-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.547626 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpdx9\" (UniqueName: \"kubernetes.io/projected/e7174d73-f6b3-4ca5-84c8-8bc499628927-kube-api-access-dpdx9\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.935903 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq" event={"ID":"e7174d73-f6b3-4ca5-84c8-8bc499628927","Type":"ContainerDied","Data":"e618c26626b8faefb058ee0507c1238f4e78196695411755e611882c0342797f"}
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.935960 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e618c26626b8faefb058ee0507c1238f4e78196695411755e611882c0342797f"
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.935923 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherd3cb-account-delete-bxwsq"
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.936058 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-central-agent" containerID="cri-o://e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" gracePeriod=30
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.936093 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="sg-core" containerID="cri-o://3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16" gracePeriod=30
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.936095 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="proxy-httpd" containerID="cri-o://da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7" gracePeriod=30
Jan 22 17:03:28 crc kubenswrapper[4704]: I0122 17:03:28.936161 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-notification-agent" containerID="cri-o://102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" gracePeriod=30
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.587887 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.656615 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" path="/var/lib/kubelet/pods/f4aedecd-1eec-4fa2-92e0-0a999ec82af6/volumes"
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.663602 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c762f-e436-4440-8400-e3d0de1e4035-logs\") pod \"670c762f-e436-4440-8400-e3d0de1e4035\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.663693 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-combined-ca-bundle\") pod \"670c762f-e436-4440-8400-e3d0de1e4035\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.663736 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-config-data\") pod \"670c762f-e436-4440-8400-e3d0de1e4035\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.663757 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-cert-memcached-mtls\") pod \"670c762f-e436-4440-8400-e3d0de1e4035\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.663913 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k79k7\" (UniqueName: \"kubernetes.io/projected/670c762f-e436-4440-8400-e3d0de1e4035-kube-api-access-k79k7\") pod \"670c762f-e436-4440-8400-e3d0de1e4035\" (UID: \"670c762f-e436-4440-8400-e3d0de1e4035\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.664152 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670c762f-e436-4440-8400-e3d0de1e4035-logs" (OuterVolumeSpecName: "logs") pod "670c762f-e436-4440-8400-e3d0de1e4035" (UID: "670c762f-e436-4440-8400-e3d0de1e4035"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.664538 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670c762f-e436-4440-8400-e3d0de1e4035-logs\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.671365 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670c762f-e436-4440-8400-e3d0de1e4035-kube-api-access-k79k7" (OuterVolumeSpecName: "kube-api-access-k79k7") pod "670c762f-e436-4440-8400-e3d0de1e4035" (UID: "670c762f-e436-4440-8400-e3d0de1e4035"). InnerVolumeSpecName "kube-api-access-k79k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.720524 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "670c762f-e436-4440-8400-e3d0de1e4035" (UID: "670c762f-e436-4440-8400-e3d0de1e4035"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.730305 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-config-data" (OuterVolumeSpecName: "config-data") pod "670c762f-e436-4440-8400-e3d0de1e4035" (UID: "670c762f-e436-4440-8400-e3d0de1e4035"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.744843 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "670c762f-e436-4440-8400-e3d0de1e4035" (UID: "670c762f-e436-4440-8400-e3d0de1e4035"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.758726 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.765817 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.765859 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.765872 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670c762f-e436-4440-8400-e3d0de1e4035-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.765883 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k79k7\" (UniqueName: \"kubernetes.io/projected/670c762f-e436-4440-8400-e3d0de1e4035-kube-api-access-k79k7\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.866335 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-run-httpd\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.866844 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.866919 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-scripts\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.866983 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gck5\" (UniqueName: \"kubernetes.io/projected/6dcd69f7-877e-435c-94bf-56e360ae4c8f-kube-api-access-2gck5\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.867050 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-ceilometer-tls-certs\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.867412 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-combined-ca-bundle\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.867464 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-config-data\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.867486 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-log-httpd\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.867534 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-sg-core-conf-yaml\") pod \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\" (UID: \"6dcd69f7-877e-435c-94bf-56e360ae4c8f\") "
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.868012 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.868260 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.870204 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-scripts" (OuterVolumeSpecName: "scripts") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.870769 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcd69f7-877e-435c-94bf-56e360ae4c8f-kube-api-access-2gck5" (OuterVolumeSpecName: "kube-api-access-2gck5") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "kube-api-access-2gck5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.899265 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.913387 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.947974 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.950964 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-config-data" (OuterVolumeSpecName: "config-data") pod "6dcd69f7-877e-435c-94bf-56e360ae4c8f" (UID: "6dcd69f7-877e-435c-94bf-56e360ae4c8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951724 4704 generic.go:334] "Generic (PLEG): container finished" podID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerID="da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7" exitCode=0
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951747 4704 generic.go:334] "Generic (PLEG): container finished" podID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerID="3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16" exitCode=2
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951755 4704 generic.go:334] "Generic (PLEG): container finished" podID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerID="102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" exitCode=0
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951763 4704 generic.go:334] "Generic (PLEG): container finished" podID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerID="e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" exitCode=0
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951806 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951819 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerDied","Data":"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7"}
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951845 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerDied","Data":"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16"}
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951855 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerDied","Data":"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf"}
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951865 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerDied","Data":"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2"}
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951874 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6dcd69f7-877e-435c-94bf-56e360ae4c8f","Type":"ContainerDied","Data":"67cdc8ffd47e597f5159a90918207b1215989e4bd399c8580a6f1ee152e34065"}
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.951887 4704 scope.go:117] "RemoveContainer" containerID="da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7"
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.955297 4704 generic.go:334] "Generic (PLEG): container finished" podID="670c762f-e436-4440-8400-e3d0de1e4035" containerID="5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" exitCode=0
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.955338 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"670c762f-e436-4440-8400-e3d0de1e4035","Type":"ContainerDied","Data":"5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db"}
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.955364 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"670c762f-e436-4440-8400-e3d0de1e4035","Type":"ContainerDied","Data":"dbc64a6d07b361fbe4e7b30ff2406e04e47caf2589a9eaa8e0462099b25c221c"}
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.955410 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.970116 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.970486 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.970645 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gck5\" (UniqueName: \"kubernetes.io/projected/6dcd69f7-877e-435c-94bf-56e360ae4c8f-kube-api-access-2gck5\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.970895 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.971039 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.971151 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dcd69f7-877e-435c-94bf-56e360ae4c8f-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.971279 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dcd69f7-877e-435c-94bf-56e360ae4c8f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 17:03:29 crc kubenswrapper[4704]: I0122 17:03:29.999353 4704 scope.go:117] "RemoveContainer" containerID="3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.005957 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.011096 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.017968 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.026016 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030546 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030836 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="sg-core"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030850 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="sg-core"
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030870 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7174d73-f6b3-4ca5-84c8-8bc499628927" containerName="mariadb-account-delete"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030876 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7174d73-f6b3-4ca5-84c8-8bc499628927" containerName="mariadb-account-delete"
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030888 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-kuttl-api-log"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030895 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-kuttl-api-log"
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030909 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="proxy-httpd"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030915 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="proxy-httpd"
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030928 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-notification-agent"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030934 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-notification-agent"
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030942 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-central-agent"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030948 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-central-agent"
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030956 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670c762f-e436-4440-8400-e3d0de1e4035" containerName="watcher-applier"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030961 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="670c762f-e436-4440-8400-e3d0de1e4035" containerName="watcher-applier"
Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.030973 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-api"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.030979 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-api"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031107 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-kuttl-api-log"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031120 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-central-agent"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031130 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="sg-core"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031135 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7174d73-f6b3-4ca5-84c8-8bc499628927" containerName="mariadb-account-delete"
Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031145 4704 memory_manager.go:354] "RemoveStaleState removing state"
podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="ceilometer-notification-agent" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031153 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" containerName="proxy-httpd" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031158 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="670c762f-e436-4440-8400-e3d0de1e4035" containerName="watcher-applier" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.031169 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4aedecd-1eec-4fa2-92e0-0a999ec82af6" containerName="watcher-api" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.033162 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.034989 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.036050 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.038111 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.047564 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.049165 4704 scope.go:117] "RemoveContainer" containerID="102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.145040 4704 scope.go:117] "RemoveContainer" containerID="e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.167003 4704 
scope.go:117] "RemoveContainer" containerID="da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7" Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.167455 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": container with ID starting with da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7 not found: ID does not exist" containerID="da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.167484 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7"} err="failed to get container status \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": rpc error: code = NotFound desc = could not find container \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": container with ID starting with da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.167519 4704 scope.go:117] "RemoveContainer" containerID="3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16" Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.167850 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": container with ID starting with 3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16 not found: ID does not exist" containerID="3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.167874 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16"} err="failed to get container status \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": rpc error: code = NotFound desc = could not find container \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": container with ID starting with 3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.167888 4704 scope.go:117] "RemoveContainer" containerID="102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.168109 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": container with ID starting with 102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf not found: ID does not exist" containerID="102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.168136 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf"} err="failed to get container status \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": rpc error: code = NotFound desc = could not find container \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": container with ID starting with 102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.168150 4704 scope.go:117] "RemoveContainer" containerID="e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.168383 4704 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": container with ID starting with e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2 not found: ID does not exist" containerID="e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.168405 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2"} err="failed to get container status \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": rpc error: code = NotFound desc = could not find container \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": container with ID starting with e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.168420 4704 scope.go:117] "RemoveContainer" containerID="da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.168821 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7"} err="failed to get container status \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": rpc error: code = NotFound desc = could not find container \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": container with ID starting with da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.168860 4704 scope.go:117] "RemoveContainer" containerID="3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169133 4704 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16"} err="failed to get container status \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": rpc error: code = NotFound desc = could not find container \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": container with ID starting with 3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169154 4704 scope.go:117] "RemoveContainer" containerID="102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169410 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf"} err="failed to get container status \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": rpc error: code = NotFound desc = could not find container \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": container with ID starting with 102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169432 4704 scope.go:117] "RemoveContainer" containerID="e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169670 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2"} err="failed to get container status \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": rpc error: code = NotFound desc = could not find container \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": container with ID starting with 
e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169692 4704 scope.go:117] "RemoveContainer" containerID="da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169942 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7"} err="failed to get container status \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": rpc error: code = NotFound desc = could not find container \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": container with ID starting with da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.169972 4704 scope.go:117] "RemoveContainer" containerID="3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170211 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16"} err="failed to get container status \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": rpc error: code = NotFound desc = could not find container \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": container with ID starting with 3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170243 4704 scope.go:117] "RemoveContainer" containerID="102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170467 4704 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf"} err="failed to get container status \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": rpc error: code = NotFound desc = could not find container \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": container with ID starting with 102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170487 4704 scope.go:117] "RemoveContainer" containerID="e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170698 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2"} err="failed to get container status \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": rpc error: code = NotFound desc = could not find container \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": container with ID starting with e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170716 4704 scope.go:117] "RemoveContainer" containerID="da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170935 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7"} err="failed to get container status \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": rpc error: code = NotFound desc = could not find container \"da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7\": container with ID starting with da0c9785bf82b3c2e2572e9d4b36faff570f46c861ee7f81ccc2a7b73d2263d7 not found: ID does not 
exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.170953 4704 scope.go:117] "RemoveContainer" containerID="3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.171149 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16"} err="failed to get container status \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": rpc error: code = NotFound desc = could not find container \"3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16\": container with ID starting with 3f21bcf7235371b2043eb30d5f221e4ba1553443d4c2dfe54bfae53426272c16 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.171169 4704 scope.go:117] "RemoveContainer" containerID="102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.171371 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf"} err="failed to get container status \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": rpc error: code = NotFound desc = could not find container \"102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf\": container with ID starting with 102a2d1c637cf93236008cd24fc1b3062e362e5e5218b91972b167c5392290bf not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.171389 4704 scope.go:117] "RemoveContainer" containerID="e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.171602 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2"} err="failed to get container status 
\"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": rpc error: code = NotFound desc = could not find container \"e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2\": container with ID starting with e3d37958d1eccc45092a6e9174a2812025be7d1e9d76c923de64ae30961636c2 not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.171620 4704 scope.go:117] "RemoveContainer" containerID="5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.175695 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.175901 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.175938 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.175960 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-log-httpd\") pod \"ceilometer-0\" (UID: 
\"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.176067 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-scripts\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.176093 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-run-httpd\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.176131 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qfrh\" (UniqueName: \"kubernetes.io/projected/8109542d-f35c-4bf4-bbdf-70184e4ce35b-kube-api-access-9qfrh\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.176149 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-config-data\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.190033 4704 scope.go:117] "RemoveContainer" containerID="5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.190459 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db\": container with ID starting with 5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db not found: ID does not exist" containerID="5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.190482 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db"} err="failed to get container status \"5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db\": rpc error: code = NotFound desc = could not find container \"5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db\": container with ID starting with 5cd1a47010de204e0115f31960cf43ed52722fdde35db0b6b4106beefe1ba2db not found: ID does not exist" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.277484 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-scripts\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.277530 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-run-httpd\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.277576 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qfrh\" (UniqueName: \"kubernetes.io/projected/8109542d-f35c-4bf4-bbdf-70184e4ce35b-kube-api-access-9qfrh\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc 
kubenswrapper[4704]: I0122 17:03:30.277599 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-config-data\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.277669 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.277701 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.277717 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.277731 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-log-httpd\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.279120 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-log-httpd\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.279159 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-run-httpd\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.282751 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.283182 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-config-data\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.284327 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.285097 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-scripts\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc 
kubenswrapper[4704]: I0122 17:03:30.286637 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.294303 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qfrh\" (UniqueName: \"kubernetes.io/projected/8109542d-f35c-4bf4-bbdf-70184e4ce35b-kube-api-access-9qfrh\") pod \"ceilometer-0\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.346906 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.354575 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.481393 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-combined-ca-bundle\") pod \"043253c9-c0fa-4002-8d32-460323cf7865\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.481832 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-config-data\") pod \"043253c9-c0fa-4002-8d32-460323cf7865\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.481883 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6l9x\" (UniqueName: \"kubernetes.io/projected/043253c9-c0fa-4002-8d32-460323cf7865-kube-api-access-r6l9x\") pod \"043253c9-c0fa-4002-8d32-460323cf7865\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.481911 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043253c9-c0fa-4002-8d32-460323cf7865-logs\") pod \"043253c9-c0fa-4002-8d32-460323cf7865\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.481997 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-custom-prometheus-ca\") pod \"043253c9-c0fa-4002-8d32-460323cf7865\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.482083 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-cert-memcached-mtls\") pod \"043253c9-c0fa-4002-8d32-460323cf7865\" (UID: \"043253c9-c0fa-4002-8d32-460323cf7865\") " Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.483283 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043253c9-c0fa-4002-8d32-460323cf7865-logs" (OuterVolumeSpecName: "logs") pod "043253c9-c0fa-4002-8d32-460323cf7865" (UID: "043253c9-c0fa-4002-8d32-460323cf7865"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.486994 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043253c9-c0fa-4002-8d32-460323cf7865-kube-api-access-r6l9x" (OuterVolumeSpecName: "kube-api-access-r6l9x") pod "043253c9-c0fa-4002-8d32-460323cf7865" (UID: "043253c9-c0fa-4002-8d32-460323cf7865"). InnerVolumeSpecName "kube-api-access-r6l9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.505045 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "043253c9-c0fa-4002-8d32-460323cf7865" (UID: "043253c9-c0fa-4002-8d32-460323cf7865"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.511210 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043253c9-c0fa-4002-8d32-460323cf7865" (UID: "043253c9-c0fa-4002-8d32-460323cf7865"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.538554 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-config-data" (OuterVolumeSpecName: "config-data") pod "043253c9-c0fa-4002-8d32-460323cf7865" (UID: "043253c9-c0fa-4002-8d32-460323cf7865"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.547532 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "043253c9-c0fa-4002-8d32-460323cf7865" (UID: "043253c9-c0fa-4002-8d32-460323cf7865"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.584052 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.584083 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.584096 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.584110 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043253c9-c0fa-4002-8d32-460323cf7865-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.584122 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6l9x\" (UniqueName: \"kubernetes.io/projected/043253c9-c0fa-4002-8d32-460323cf7865-kube-api-access-r6l9x\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.584134 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/043253c9-c0fa-4002-8d32-460323cf7865-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.663662 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wjskj"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.678112 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wjskj"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.688161 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.695578 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-d3cb-account-create-update-bt8dw"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.701464 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherd3cb-account-delete-bxwsq"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.709954 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherd3cb-account-delete-bxwsq"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.756949 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-9jk29"] Jan 22 17:03:30 crc kubenswrapper[4704]: E0122 17:03:30.757419 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043253c9-c0fa-4002-8d32-460323cf7865" 
containerName="watcher-decision-engine" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.757493 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="043253c9-c0fa-4002-8d32-460323cf7865" containerName="watcher-decision-engine" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.757720 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="043253c9-c0fa-4002-8d32-460323cf7865" containerName="watcher-decision-engine" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.758298 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.765289 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9jk29"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.857044 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-n9krw"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.858557 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.861394 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.870885 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.883527 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-n9krw"] Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.888727 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf6dq\" (UniqueName: \"kubernetes.io/projected/eaa2a59d-955e-4b4f-8092-fa24ba640086-kube-api-access-kf6dq\") pod \"watcher-db-create-9jk29\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.889118 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaa2a59d-955e-4b4f-8092-fa24ba640086-operator-scripts\") pod \"watcher-db-create-9jk29\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.969751 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerStarted","Data":"ccb03aa3d00456bbb90435abab722fc710cfdb583372e09eca388bb8864f8f57"} Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.977266 4704 generic.go:334] "Generic (PLEG): container finished" podID="043253c9-c0fa-4002-8d32-460323cf7865" 
containerID="3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd" exitCode=0 Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.977472 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.979398 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"043253c9-c0fa-4002-8d32-460323cf7865","Type":"ContainerDied","Data":"3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd"} Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.979457 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"043253c9-c0fa-4002-8d32-460323cf7865","Type":"ContainerDied","Data":"77d158df5d4da5befa5dc6c0cd477ccdfdc7231de1773d9d6f0bed54144e8c10"} Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.979481 4704 scope.go:117] "RemoveContainer" containerID="3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.990656 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f718b5d-3eed-45b7-a6eb-a63797e882d3-operator-scripts\") pod \"watcher-test-account-create-update-n9krw\" (UID: \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.990714 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaa2a59d-955e-4b4f-8092-fa24ba640086-operator-scripts\") pod \"watcher-db-create-9jk29\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:30 crc 
kubenswrapper[4704]: I0122 17:03:30.990785 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf6dq\" (UniqueName: \"kubernetes.io/projected/eaa2a59d-955e-4b4f-8092-fa24ba640086-kube-api-access-kf6dq\") pod \"watcher-db-create-9jk29\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.990825 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9rhv\" (UniqueName: \"kubernetes.io/projected/3f718b5d-3eed-45b7-a6eb-a63797e882d3-kube-api-access-j9rhv\") pod \"watcher-test-account-create-update-n9krw\" (UID: \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:30 crc kubenswrapper[4704]: I0122 17:03:30.991438 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaa2a59d-955e-4b4f-8092-fa24ba640086-operator-scripts\") pod \"watcher-db-create-9jk29\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.009780 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf6dq\" (UniqueName: \"kubernetes.io/projected/eaa2a59d-955e-4b4f-8092-fa24ba640086-kube-api-access-kf6dq\") pod \"watcher-db-create-9jk29\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.015074 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.016870 4704 scope.go:117] "RemoveContainer" containerID="3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd" Jan 22 
17:03:31 crc kubenswrapper[4704]: E0122 17:03:31.017260 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd\": container with ID starting with 3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd not found: ID does not exist" containerID="3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.017307 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd"} err="failed to get container status \"3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd\": rpc error: code = NotFound desc = could not find container \"3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd\": container with ID starting with 3a7ffd68d13bacca815a02cef8ff5d8b64b891e8ed7466a22a75dc86fc84c7fd not found: ID does not exist" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.021227 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.094857 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f718b5d-3eed-45b7-a6eb-a63797e882d3-operator-scripts\") pod \"watcher-test-account-create-update-n9krw\" (UID: \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.094971 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9rhv\" (UniqueName: \"kubernetes.io/projected/3f718b5d-3eed-45b7-a6eb-a63797e882d3-kube-api-access-j9rhv\") pod \"watcher-test-account-create-update-n9krw\" (UID: 
\"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.095617 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f718b5d-3eed-45b7-a6eb-a63797e882d3-operator-scripts\") pod \"watcher-test-account-create-update-n9krw\" (UID: \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.111782 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.119100 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9rhv\" (UniqueName: \"kubernetes.io/projected/3f718b5d-3eed-45b7-a6eb-a63797e882d3-kube-api-access-j9rhv\") pod \"watcher-test-account-create-update-n9krw\" (UID: \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.184803 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.573076 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9jk29"] Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.655637 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043253c9-c0fa-4002-8d32-460323cf7865" path="/var/lib/kubelet/pods/043253c9-c0fa-4002-8d32-460323cf7865/volumes" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.658962 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670c762f-e436-4440-8400-e3d0de1e4035" path="/var/lib/kubelet/pods/670c762f-e436-4440-8400-e3d0de1e4035/volumes" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.659541 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dcd69f7-877e-435c-94bf-56e360ae4c8f" path="/var/lib/kubelet/pods/6dcd69f7-877e-435c-94bf-56e360ae4c8f/volumes" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.660973 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7305dd99-bb45-4b18-b6df-923aded1f77a" path="/var/lib/kubelet/pods/7305dd99-bb45-4b18-b6df-923aded1f77a/volumes" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.661544 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e513855f-00f7-4f6d-95f9-eb83a01a2e3c" path="/var/lib/kubelet/pods/e513855f-00f7-4f6d-95f9-eb83a01a2e3c/volumes" Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.662171 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7174d73-f6b3-4ca5-84c8-8bc499628927" path="/var/lib/kubelet/pods/e7174d73-f6b3-4ca5-84c8-8bc499628927/volumes" Jan 22 17:03:31 crc kubenswrapper[4704]: W0122 17:03:31.684611 4704 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f718b5d_3eed_45b7_a6eb_a63797e882d3.slice/crio-a52c3a93100487f5f2edb8def55b9f100c97e8b0091454446a11c2e978be4740 WatchSource:0}: Error finding container a52c3a93100487f5f2edb8def55b9f100c97e8b0091454446a11c2e978be4740: Status 404 returned error can't find the container with id a52c3a93100487f5f2edb8def55b9f100c97e8b0091454446a11c2e978be4740 Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.687494 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-n9krw"] Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.996935 4704 generic.go:334] "Generic (PLEG): container finished" podID="eaa2a59d-955e-4b4f-8092-fa24ba640086" containerID="61d831c6ad4b89c33ee6606088a35ebffbb494d14bcbd2c34526959184400d23" exitCode=0 Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.996994 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9jk29" event={"ID":"eaa2a59d-955e-4b4f-8092-fa24ba640086","Type":"ContainerDied","Data":"61d831c6ad4b89c33ee6606088a35ebffbb494d14bcbd2c34526959184400d23"} Jan 22 17:03:31 crc kubenswrapper[4704]: I0122 17:03:31.997054 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9jk29" event={"ID":"eaa2a59d-955e-4b4f-8092-fa24ba640086","Type":"ContainerStarted","Data":"d15307aea2370fe71289d97e85b677728d006603cf50d817bdc67250ae582fad"} Jan 22 17:03:32 crc kubenswrapper[4704]: I0122 17:03:32.000209 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" event={"ID":"3f718b5d-3eed-45b7-a6eb-a63797e882d3","Type":"ContainerStarted","Data":"8fd2fd2280010213ef609f9b7a989cc0b20bc1c6543a2e87a3a40a6fef70dd27"} Jan 22 17:03:32 crc kubenswrapper[4704]: I0122 17:03:32.000571 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" event={"ID":"3f718b5d-3eed-45b7-a6eb-a63797e882d3","Type":"ContainerStarted","Data":"a52c3a93100487f5f2edb8def55b9f100c97e8b0091454446a11c2e978be4740"} Jan 22 17:03:32 crc kubenswrapper[4704]: I0122 17:03:32.001663 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerStarted","Data":"ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3"} Jan 22 17:03:32 crc kubenswrapper[4704]: I0122 17:03:32.032892 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" podStartSLOduration=2.032869507 podStartE2EDuration="2.032869507s" podCreationTimestamp="2026-01-22 17:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:32.032229559 +0000 UTC m=+2104.676776259" watchObservedRunningTime="2026-01-22 17:03:32.032869507 +0000 UTC m=+2104.677416207" Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.011135 4704 generic.go:334] "Generic (PLEG): container finished" podID="3f718b5d-3eed-45b7-a6eb-a63797e882d3" containerID="8fd2fd2280010213ef609f9b7a989cc0b20bc1c6543a2e87a3a40a6fef70dd27" exitCode=0 Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.011218 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" event={"ID":"3f718b5d-3eed-45b7-a6eb-a63797e882d3","Type":"ContainerDied","Data":"8fd2fd2280010213ef609f9b7a989cc0b20bc1c6543a2e87a3a40a6fef70dd27"} Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.013650 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerStarted","Data":"de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68"} Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.013685 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerStarted","Data":"ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779"} Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.411331 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.537157 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf6dq\" (UniqueName: \"kubernetes.io/projected/eaa2a59d-955e-4b4f-8092-fa24ba640086-kube-api-access-kf6dq\") pod \"eaa2a59d-955e-4b4f-8092-fa24ba640086\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.537419 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaa2a59d-955e-4b4f-8092-fa24ba640086-operator-scripts\") pod \"eaa2a59d-955e-4b4f-8092-fa24ba640086\" (UID: \"eaa2a59d-955e-4b4f-8092-fa24ba640086\") " Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.538177 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa2a59d-955e-4b4f-8092-fa24ba640086-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eaa2a59d-955e-4b4f-8092-fa24ba640086" (UID: "eaa2a59d-955e-4b4f-8092-fa24ba640086"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.554271 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa2a59d-955e-4b4f-8092-fa24ba640086-kube-api-access-kf6dq" (OuterVolumeSpecName: "kube-api-access-kf6dq") pod "eaa2a59d-955e-4b4f-8092-fa24ba640086" (UID: "eaa2a59d-955e-4b4f-8092-fa24ba640086"). InnerVolumeSpecName "kube-api-access-kf6dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.639278 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaa2a59d-955e-4b4f-8092-fa24ba640086-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:33 crc kubenswrapper[4704]: I0122 17:03:33.639513 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf6dq\" (UniqueName: \"kubernetes.io/projected/eaa2a59d-955e-4b4f-8092-fa24ba640086-kube-api-access-kf6dq\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.021536 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9jk29" Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.022091 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9jk29" event={"ID":"eaa2a59d-955e-4b4f-8092-fa24ba640086","Type":"ContainerDied","Data":"d15307aea2370fe71289d97e85b677728d006603cf50d817bdc67250ae582fad"} Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.022112 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d15307aea2370fe71289d97e85b677728d006603cf50d817bdc67250ae582fad" Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.451762 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.555782 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9rhv\" (UniqueName: \"kubernetes.io/projected/3f718b5d-3eed-45b7-a6eb-a63797e882d3-kube-api-access-j9rhv\") pod \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\" (UID: \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.556041 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f718b5d-3eed-45b7-a6eb-a63797e882d3-operator-scripts\") pod \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\" (UID: \"3f718b5d-3eed-45b7-a6eb-a63797e882d3\") " Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.556603 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f718b5d-3eed-45b7-a6eb-a63797e882d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f718b5d-3eed-45b7-a6eb-a63797e882d3" (UID: "3f718b5d-3eed-45b7-a6eb-a63797e882d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.560316 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f718b5d-3eed-45b7-a6eb-a63797e882d3-kube-api-access-j9rhv" (OuterVolumeSpecName: "kube-api-access-j9rhv") pod "3f718b5d-3eed-45b7-a6eb-a63797e882d3" (UID: "3f718b5d-3eed-45b7-a6eb-a63797e882d3"). InnerVolumeSpecName "kube-api-access-j9rhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.657602 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f718b5d-3eed-45b7-a6eb-a63797e882d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:34 crc kubenswrapper[4704]: I0122 17:03:34.657651 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9rhv\" (UniqueName: \"kubernetes.io/projected/3f718b5d-3eed-45b7-a6eb-a63797e882d3-kube-api-access-j9rhv\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.032260 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" event={"ID":"3f718b5d-3eed-45b7-a6eb-a63797e882d3","Type":"ContainerDied","Data":"a52c3a93100487f5f2edb8def55b9f100c97e8b0091454446a11c2e978be4740"} Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.032301 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a52c3a93100487f5f2edb8def55b9f100c97e8b0091454446a11c2e978be4740" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.032355 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-n9krw" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.045904 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerStarted","Data":"1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c"} Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.046135 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.487886 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.199807551 podStartE2EDuration="5.487869054s" podCreationTimestamp="2026-01-22 17:03:30 +0000 UTC" firstStartedPulling="2026-01-22 17:03:30.871485305 +0000 UTC m=+2103.516032005" lastFinishedPulling="2026-01-22 17:03:34.159546808 +0000 UTC m=+2106.804093508" observedRunningTime="2026-01-22 17:03:35.081898789 +0000 UTC m=+2107.726445499" watchObservedRunningTime="2026-01-22 17:03:35.487869054 +0000 UTC m=+2108.132415754" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.667460 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vdb9g"] Jan 22 17:03:35 crc kubenswrapper[4704]: E0122 17:03:35.667752 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f718b5d-3eed-45b7-a6eb-a63797e882d3" containerName="mariadb-account-create-update" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.667769 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f718b5d-3eed-45b7-a6eb-a63797e882d3" containerName="mariadb-account-create-update" Jan 22 17:03:35 crc kubenswrapper[4704]: E0122 17:03:35.667783 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa2a59d-955e-4b4f-8092-fa24ba640086" 
containerName="mariadb-database-create" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.667789 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa2a59d-955e-4b4f-8092-fa24ba640086" containerName="mariadb-database-create" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.667968 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa2a59d-955e-4b4f-8092-fa24ba640086" containerName="mariadb-database-create" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.667994 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f718b5d-3eed-45b7-a6eb-a63797e882d3" containerName="mariadb-account-create-update" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.669053 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.692438 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdb9g"] Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.875714 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-catalog-content\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.875871 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-utilities\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.876067 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rvd8k\" (UniqueName: \"kubernetes.io/projected/dc4dce0b-d488-4d43-af9b-6ce5b92372da-kube-api-access-rvd8k\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.979102 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-utilities\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.979242 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvd8k\" (UniqueName: \"kubernetes.io/projected/dc4dce0b-d488-4d43-af9b-6ce5b92372da-kube-api-access-rvd8k\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.979364 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-catalog-content\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.979786 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-utilities\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:35 crc kubenswrapper[4704]: I0122 17:03:35.979912 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-catalog-content\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:35.998631 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvd8k\" (UniqueName: \"kubernetes.io/projected/dc4dce0b-d488-4d43-af9b-6ce5b92372da-kube-api-access-rvd8k\") pod \"redhat-marketplace-vdb9g\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.193279 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd"] Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.194526 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.199716 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.200341 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7skt4" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.210373 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd"] Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.289467 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.385488 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mwgn\" (UniqueName: \"kubernetes.io/projected/274b282d-041c-498e-93c4-d880467b21ce-kube-api-access-6mwgn\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.385774 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-config-data\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.386006 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-db-sync-config-data\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.386160 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.486868 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-db-sync-config-data\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.488115 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.488183 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mwgn\" (UniqueName: \"kubernetes.io/projected/274b282d-041c-498e-93c4-d880467b21ce-kube-api-access-6mwgn\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.488281 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-config-data\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.494688 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-db-sync-config-data\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.508398 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.509405 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-config-data\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.519238 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mwgn\" (UniqueName: \"kubernetes.io/projected/274b282d-041c-498e-93c4-d880467b21ce-kube-api-access-6mwgn\") pod \"watcher-kuttl-db-sync-r8fqd\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.579376 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdb9g"] Jan 22 17:03:36 crc kubenswrapper[4704]: I0122 17:03:36.815333 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.064245 4704 generic.go:334] "Generic (PLEG): container finished" podID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerID="eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746" exitCode=0 Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.064294 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdb9g" event={"ID":"dc4dce0b-d488-4d43-af9b-6ce5b92372da","Type":"ContainerDied","Data":"eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746"} Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.064319 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdb9g" event={"ID":"dc4dce0b-d488-4d43-af9b-6ce5b92372da","Type":"ContainerStarted","Data":"0299bdeaa5c91ed98ff2f9f036bc63c680151e547b9db420b8fc5e58b4fe71a5"} Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.316645 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd"] Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.869927 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pmnl8"] Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.874825 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.887049 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmnl8"] Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.915639 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-catalog-content\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.915757 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm42\" (UniqueName: \"kubernetes.io/projected/48fb0ce2-d18c-4672-ae25-974c1325bbcc-kube-api-access-rkm42\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:37 crc kubenswrapper[4704]: I0122 17:03:37.915812 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-utilities\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.016814 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm42\" (UniqueName: \"kubernetes.io/projected/48fb0ce2-d18c-4672-ae25-974c1325bbcc-kube-api-access-rkm42\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.016908 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-utilities\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.016965 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-catalog-content\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.017496 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-utilities\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.017518 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-catalog-content\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.034701 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm42\" (UniqueName: \"kubernetes.io/projected/48fb0ce2-d18c-4672-ae25-974c1325bbcc-kube-api-access-rkm42\") pod \"redhat-operators-pmnl8\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.073163 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" 
event={"ID":"274b282d-041c-498e-93c4-d880467b21ce","Type":"ContainerStarted","Data":"f711238096660cbdbbe2b05cc7adedaa250749d51af6f727dac170a409ebc75d"} Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.073205 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" event={"ID":"274b282d-041c-498e-93c4-d880467b21ce","Type":"ContainerStarted","Data":"08e066a2deba08c5f09b47d264fec5452f8dcf11001b7143365b467153a71ba7"} Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.075310 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdb9g" event={"ID":"dc4dce0b-d488-4d43-af9b-6ce5b92372da","Type":"ContainerStarted","Data":"bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1"} Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.088417 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" podStartSLOduration=2.088399795 podStartE2EDuration="2.088399795s" podCreationTimestamp="2026-01-22 17:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:38.086678206 +0000 UTC m=+2110.731224906" watchObservedRunningTime="2026-01-22 17:03:38.088399795 +0000 UTC m=+2110.732946495" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.188170 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.701514 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmnl8"] Jan 22 17:03:38 crc kubenswrapper[4704]: W0122 17:03:38.706258 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48fb0ce2_d18c_4672_ae25_974c1325bbcc.slice/crio-e3dd8a70b1fff4daa41769ec9af688f699e20a86ae014d30f9452941abc1bbc4 WatchSource:0}: Error finding container e3dd8a70b1fff4daa41769ec9af688f699e20a86ae014d30f9452941abc1bbc4: Status 404 returned error can't find the container with id e3dd8a70b1fff4daa41769ec9af688f699e20a86ae014d30f9452941abc1bbc4 Jan 22 17:03:38 crc kubenswrapper[4704]: I0122 17:03:38.887412 4704 scope.go:117] "RemoveContainer" containerID="eec2e8ba32acceee20ec26951d704a25b0f7f58fdb2f5b10d7b4d32fa6e371c1" Jan 22 17:03:39 crc kubenswrapper[4704]: I0122 17:03:39.088150 4704 generic.go:334] "Generic (PLEG): container finished" podID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerID="7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062" exitCode=0 Jan 22 17:03:39 crc kubenswrapper[4704]: I0122 17:03:39.088443 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmnl8" event={"ID":"48fb0ce2-d18c-4672-ae25-974c1325bbcc","Type":"ContainerDied","Data":"7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062"} Jan 22 17:03:39 crc kubenswrapper[4704]: I0122 17:03:39.088478 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmnl8" event={"ID":"48fb0ce2-d18c-4672-ae25-974c1325bbcc","Type":"ContainerStarted","Data":"e3dd8a70b1fff4daa41769ec9af688f699e20a86ae014d30f9452941abc1bbc4"} Jan 22 17:03:39 crc kubenswrapper[4704]: I0122 17:03:39.097685 4704 generic.go:334] "Generic (PLEG): container finished" 
podID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerID="bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1" exitCode=0 Jan 22 17:03:39 crc kubenswrapper[4704]: I0122 17:03:39.097721 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdb9g" event={"ID":"dc4dce0b-d488-4d43-af9b-6ce5b92372da","Type":"ContainerDied","Data":"bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1"} Jan 22 17:03:40 crc kubenswrapper[4704]: I0122 17:03:40.107544 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdb9g" event={"ID":"dc4dce0b-d488-4d43-af9b-6ce5b92372da","Type":"ContainerStarted","Data":"c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1"} Jan 22 17:03:40 crc kubenswrapper[4704]: I0122 17:03:40.109975 4704 generic.go:334] "Generic (PLEG): container finished" podID="274b282d-041c-498e-93c4-d880467b21ce" containerID="f711238096660cbdbbe2b05cc7adedaa250749d51af6f727dac170a409ebc75d" exitCode=0 Jan 22 17:03:40 crc kubenswrapper[4704]: I0122 17:03:40.110037 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" event={"ID":"274b282d-041c-498e-93c4-d880467b21ce","Type":"ContainerDied","Data":"f711238096660cbdbbe2b05cc7adedaa250749d51af6f727dac170a409ebc75d"} Jan 22 17:03:40 crc kubenswrapper[4704]: I0122 17:03:40.111503 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmnl8" event={"ID":"48fb0ce2-d18c-4672-ae25-974c1325bbcc","Type":"ContainerStarted","Data":"097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859"} Jan 22 17:03:40 crc kubenswrapper[4704]: I0122 17:03:40.129114 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vdb9g" podStartSLOduration=2.7071894739999998 podStartE2EDuration="5.129094778s" podCreationTimestamp="2026-01-22 17:03:35 +0000 UTC" 
firstStartedPulling="2026-01-22 17:03:37.066313996 +0000 UTC m=+2109.710860696" lastFinishedPulling="2026-01-22 17:03:39.4882193 +0000 UTC m=+2112.132766000" observedRunningTime="2026-01-22 17:03:40.123697805 +0000 UTC m=+2112.768244515" watchObservedRunningTime="2026-01-22 17:03:40.129094778 +0000 UTC m=+2112.773641468" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.477264 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.590319 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-db-sync-config-data\") pod \"274b282d-041c-498e-93c4-d880467b21ce\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.590423 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-config-data\") pod \"274b282d-041c-498e-93c4-d880467b21ce\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.590586 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-combined-ca-bundle\") pod \"274b282d-041c-498e-93c4-d880467b21ce\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.590767 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mwgn\" (UniqueName: \"kubernetes.io/projected/274b282d-041c-498e-93c4-d880467b21ce-kube-api-access-6mwgn\") pod \"274b282d-041c-498e-93c4-d880467b21ce\" (UID: \"274b282d-041c-498e-93c4-d880467b21ce\") " Jan 22 17:03:41 crc kubenswrapper[4704]: 
I0122 17:03:41.595651 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274b282d-041c-498e-93c4-d880467b21ce-kube-api-access-6mwgn" (OuterVolumeSpecName: "kube-api-access-6mwgn") pod "274b282d-041c-498e-93c4-d880467b21ce" (UID: "274b282d-041c-498e-93c4-d880467b21ce"). InnerVolumeSpecName "kube-api-access-6mwgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.596140 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "274b282d-041c-498e-93c4-d880467b21ce" (UID: "274b282d-041c-498e-93c4-d880467b21ce"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.628747 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "274b282d-041c-498e-93c4-d880467b21ce" (UID: "274b282d-041c-498e-93c4-d880467b21ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.630097 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-config-data" (OuterVolumeSpecName: "config-data") pod "274b282d-041c-498e-93c4-d880467b21ce" (UID: "274b282d-041c-498e-93c4-d880467b21ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.698004 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.698046 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mwgn\" (UniqueName: \"kubernetes.io/projected/274b282d-041c-498e-93c4-d880467b21ce-kube-api-access-6mwgn\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.698060 4704 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:41 crc kubenswrapper[4704]: I0122 17:03:41.698071 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/274b282d-041c-498e-93c4-d880467b21ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.127763 4704 generic.go:334] "Generic (PLEG): container finished" podID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerID="097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859" exitCode=0 Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.127836 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmnl8" event={"ID":"48fb0ce2-d18c-4672-ae25-974c1325bbcc","Type":"ContainerDied","Data":"097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859"} Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.130151 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" 
event={"ID":"274b282d-041c-498e-93c4-d880467b21ce","Type":"ContainerDied","Data":"08e066a2deba08c5f09b47d264fec5452f8dcf11001b7143365b467153a71ba7"} Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.130175 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08e066a2deba08c5f09b47d264fec5452f8dcf11001b7143365b467153a71ba7" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.130212 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.453934 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:03:42 crc kubenswrapper[4704]: E0122 17:03:42.454329 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274b282d-041c-498e-93c4-d880467b21ce" containerName="watcher-kuttl-db-sync" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.454394 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="274b282d-041c-498e-93c4-d880467b21ce" containerName="watcher-kuttl-db-sync" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.454605 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="274b282d-041c-498e-93c4-d880467b21ce" containerName="watcher-kuttl-db-sync" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.455527 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.457726 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.457731 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7skt4" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.472671 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.474563 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.480776 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.490365 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.491913 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.503595 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509607 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509671 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509709 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2af8130-e779-48b7-9eb2-fa1c2f709020-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509765 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509819 4704 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509845 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509869 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mc8d\" (UniqueName: \"kubernetes.io/projected/f2af8130-e779-48b7-9eb2-fa1c2f709020-kube-api-access-7mc8d\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509914 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b866f01a-a70c-4f93-b005-3661f5a1be3c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509936 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.509963 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8s8\" (UniqueName: \"kubernetes.io/projected/b866f01a-a70c-4f93-b005-3661f5a1be3c-kube-api-access-xp8s8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510041 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510065 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510093 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510113 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23173b76-e787-4014-bf87-f8d0f76483c8-logs\") pod \"watcher-kuttl-api-1\" (UID: 
\"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510155 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510177 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510201 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4fv\" (UniqueName: \"kubernetes.io/projected/23173b76-e787-4014-bf87-f8d0f76483c8-kube-api-access-7b4fv\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.510226 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.524764 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.567224 4704 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.583353 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.585041 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.586916 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.601626 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.612883 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de55292b-e231-4674-b0c5-635bb5ca45d0-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.612948 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.612972 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: 
I0122 17:03:42.612997 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613013 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23173b76-e787-4014-bf87-f8d0f76483c8-logs\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613045 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613063 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613084 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4fv\" (UniqueName: \"kubernetes.io/projected/23173b76-e787-4014-bf87-f8d0f76483c8-kube-api-access-7b4fv\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613105 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613128 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613151 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613221 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2af8130-e779-48b7-9eb2-fa1c2f709020-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613253 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.613302 4704 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614130 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23173b76-e787-4014-bf87-f8d0f76483c8-logs\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614404 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2af8130-e779-48b7-9eb2-fa1c2f709020-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614633 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614675 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614710 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614736 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mc8d\" (UniqueName: \"kubernetes.io/projected/f2af8130-e779-48b7-9eb2-fa1c2f709020-kube-api-access-7mc8d\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614768 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614831 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vft94\" (UniqueName: \"kubernetes.io/projected/de55292b-e231-4674-b0c5-635bb5ca45d0-kube-api-access-vft94\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614863 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b866f01a-a70c-4f93-b005-3661f5a1be3c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614892 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.614921 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp8s8\" (UniqueName: \"kubernetes.io/projected/b866f01a-a70c-4f93-b005-3661f5a1be3c-kube-api-access-xp8s8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.616225 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b866f01a-a70c-4f93-b005-3661f5a1be3c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.618067 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.618810 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.619039 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" 
(UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.619982 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.620146 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.620568 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.622048 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.622459 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.622928 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.626413 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.635426 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.636520 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.654369 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mc8d\" (UniqueName: 
\"kubernetes.io/projected/f2af8130-e779-48b7-9eb2-fa1c2f709020-kube-api-access-7mc8d\") pod \"watcher-kuttl-api-0\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.654517 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8s8\" (UniqueName: \"kubernetes.io/projected/b866f01a-a70c-4f93-b005-3661f5a1be3c-kube-api-access-xp8s8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.656288 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b4fv\" (UniqueName: \"kubernetes.io/projected/23173b76-e787-4014-bf87-f8d0f76483c8-kube-api-access-7b4fv\") pod \"watcher-kuttl-api-1\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.716927 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.717247 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.717361 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.717462 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vft94\" (UniqueName: \"kubernetes.io/projected/de55292b-e231-4674-b0c5-635bb5ca45d0-kube-api-access-vft94\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.717609 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de55292b-e231-4674-b0c5-635bb5ca45d0-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.718008 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de55292b-e231-4674-b0c5-635bb5ca45d0-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.720997 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.721483 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-config-data\") pod 
\"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.721855 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.736577 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vft94\" (UniqueName: \"kubernetes.io/projected/de55292b-e231-4674-b0c5-635bb5ca45d0-kube-api-access-vft94\") pod \"watcher-kuttl-applier-0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.805428 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.817856 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.829973 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:42 crc kubenswrapper[4704]: I0122 17:03:42.901901 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:43 crc kubenswrapper[4704]: I0122 17:03:43.143898 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmnl8" event={"ID":"48fb0ce2-d18c-4672-ae25-974c1325bbcc","Type":"ContainerStarted","Data":"a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4"} Jan 22 17:03:43 crc kubenswrapper[4704]: I0122 17:03:43.336013 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pmnl8" podStartSLOduration=2.794519418 podStartE2EDuration="6.335983918s" podCreationTimestamp="2026-01-22 17:03:37 +0000 UTC" firstStartedPulling="2026-01-22 17:03:39.097105306 +0000 UTC m=+2111.741652006" lastFinishedPulling="2026-01-22 17:03:42.638569806 +0000 UTC m=+2115.283116506" observedRunningTime="2026-01-22 17:03:43.168629761 +0000 UTC m=+2115.813176461" watchObservedRunningTime="2026-01-22 17:03:43.335983918 +0000 UTC m=+2115.980530628" Jan 22 17:03:43 crc kubenswrapper[4704]: I0122 17:03:43.342166 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:03:43 crc kubenswrapper[4704]: W0122 17:03:43.353180 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2af8130_e779_48b7_9eb2_fa1c2f709020.slice/crio-83ccaf0d7185fb02ceb84480d49ef7e056dcd6b76e3bb4b4c851e6d2ce308014 WatchSource:0}: Error finding container 83ccaf0d7185fb02ceb84480d49ef7e056dcd6b76e3bb4b4c851e6d2ce308014: Status 404 returned error can't find the container with id 83ccaf0d7185fb02ceb84480d49ef7e056dcd6b76e3bb4b4c851e6d2ce308014 Jan 22 17:03:43 crc kubenswrapper[4704]: I0122 17:03:43.356868 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:03:43 crc kubenswrapper[4704]: I0122 17:03:43.380053 4704 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:03:43 crc kubenswrapper[4704]: W0122 17:03:43.388294 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb866f01a_a70c_4f93_b005_3661f5a1be3c.slice/crio-b04017efaec0ce464f7b50419996af91c5b621b1174b7251b6b39936b7dcc117 WatchSource:0}: Error finding container b04017efaec0ce464f7b50419996af91c5b621b1174b7251b6b39936b7dcc117: Status 404 returned error can't find the container with id b04017efaec0ce464f7b50419996af91c5b621b1174b7251b6b39936b7dcc117 Jan 22 17:03:43 crc kubenswrapper[4704]: I0122 17:03:43.539977 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.156836 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"23173b76-e787-4014-bf87-f8d0f76483c8","Type":"ContainerStarted","Data":"dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.157119 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"23173b76-e787-4014-bf87-f8d0f76483c8","Type":"ContainerStarted","Data":"071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.157139 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"23173b76-e787-4014-bf87-f8d0f76483c8","Type":"ContainerStarted","Data":"fb631780b607cb45e9339d9a64bf6094ad451763ffa43e088e85628e9c7b07bd"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.157509 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.159582 4704 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"de55292b-e231-4674-b0c5-635bb5ca45d0","Type":"ContainerStarted","Data":"63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.159749 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"de55292b-e231-4674-b0c5-635bb5ca45d0","Type":"ContainerStarted","Data":"e4319795cc11f451845747b547c3c2fd6ce750a8dd15be5a0edf4d6ea86e16f7"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.162082 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b866f01a-a70c-4f93-b005-3661f5a1be3c","Type":"ContainerStarted","Data":"691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.162148 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b866f01a-a70c-4f93-b005-3661f5a1be3c","Type":"ContainerStarted","Data":"b04017efaec0ce464f7b50419996af91c5b621b1174b7251b6b39936b7dcc117"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.164546 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2af8130-e779-48b7-9eb2-fa1c2f709020","Type":"ContainerStarted","Data":"38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.164593 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2af8130-e779-48b7-9eb2-fa1c2f709020","Type":"ContainerStarted","Data":"93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.164606 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2af8130-e779-48b7-9eb2-fa1c2f709020","Type":"ContainerStarted","Data":"83ccaf0d7185fb02ceb84480d49ef7e056dcd6b76e3bb4b4c851e6d2ce308014"} Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.164878 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.166155 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.229:9322/\": dial tcp 10.217.0.229:9322: connect: connection refused" Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.177207 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.177187468 podStartE2EDuration="2.177187468s" podCreationTimestamp="2026-01-22 17:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:44.173695219 +0000 UTC m=+2116.818241919" watchObservedRunningTime="2026-01-22 17:03:44.177187468 +0000 UTC m=+2116.821734168" Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.201387 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.201364064 podStartE2EDuration="2.201364064s" podCreationTimestamp="2026-01-22 17:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:44.199273304 +0000 UTC m=+2116.843820004" watchObservedRunningTime="2026-01-22 17:03:44.201364064 +0000 UTC m=+2116.845910764" Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.224072 4704 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.224051437 podStartE2EDuration="2.224051437s" podCreationTimestamp="2026-01-22 17:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:44.217490001 +0000 UTC m=+2116.862036701" watchObservedRunningTime="2026-01-22 17:03:44.224051437 +0000 UTC m=+2116.868598137" Jan 22 17:03:44 crc kubenswrapper[4704]: I0122 17:03:44.240839 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.240820323 podStartE2EDuration="2.240820323s" podCreationTimestamp="2026-01-22 17:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:03:44.232109526 +0000 UTC m=+2116.876656226" watchObservedRunningTime="2026-01-22 17:03:44.240820323 +0000 UTC m=+2116.885367033" Jan 22 17:03:46 crc kubenswrapper[4704]: I0122 17:03:46.290549 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:46 crc kubenswrapper[4704]: I0122 17:03:46.291069 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:46 crc kubenswrapper[4704]: I0122 17:03:46.361095 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:46 crc kubenswrapper[4704]: I0122 17:03:46.784539 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:47 crc kubenswrapper[4704]: I0122 17:03:47.261403 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vdb9g" 
Jan 22 17:03:47 crc kubenswrapper[4704]: I0122 17:03:47.553063 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:47 crc kubenswrapper[4704]: I0122 17:03:47.806537 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:47 crc kubenswrapper[4704]: I0122 17:03:47.818371 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:47 crc kubenswrapper[4704]: I0122 17:03:47.902674 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:48 crc kubenswrapper[4704]: I0122 17:03:48.188884 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:48 crc kubenswrapper[4704]: I0122 17:03:48.188961 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:48 crc kubenswrapper[4704]: I0122 17:03:48.454387 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdb9g"] Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.086407 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.086481 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.230499 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vdb9g" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="registry-server" containerID="cri-o://c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1" gracePeriod=2 Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.244832 4704 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pmnl8" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="registry-server" probeResult="failure" output=< Jan 22 17:03:49 crc kubenswrapper[4704]: timeout: failed to connect service ":50051" within 1s Jan 22 17:03:49 crc kubenswrapper[4704]: > Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.773074 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.859978 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-utilities\") pod \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.860164 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvd8k\" (UniqueName: \"kubernetes.io/projected/dc4dce0b-d488-4d43-af9b-6ce5b92372da-kube-api-access-rvd8k\") pod \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.860205 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-catalog-content\") pod 
\"dc4dce0b-d488-4d43-af9b-6ce5b92372da\" (UID: \"dc4dce0b-d488-4d43-af9b-6ce5b92372da\") " Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.860830 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-utilities" (OuterVolumeSpecName: "utilities") pod "dc4dce0b-d488-4d43-af9b-6ce5b92372da" (UID: "dc4dce0b-d488-4d43-af9b-6ce5b92372da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.868428 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4dce0b-d488-4d43-af9b-6ce5b92372da-kube-api-access-rvd8k" (OuterVolumeSpecName: "kube-api-access-rvd8k") pod "dc4dce0b-d488-4d43-af9b-6ce5b92372da" (UID: "dc4dce0b-d488-4d43-af9b-6ce5b92372da"). InnerVolumeSpecName "kube-api-access-rvd8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.878071 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc4dce0b-d488-4d43-af9b-6ce5b92372da" (UID: "dc4dce0b-d488-4d43-af9b-6ce5b92372da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.961734 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvd8k\" (UniqueName: \"kubernetes.io/projected/dc4dce0b-d488-4d43-af9b-6ce5b92372da-kube-api-access-rvd8k\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.961778 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:49 crc kubenswrapper[4704]: I0122 17:03:49.961787 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc4dce0b-d488-4d43-af9b-6ce5b92372da-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.241237 4704 generic.go:334] "Generic (PLEG): container finished" podID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerID="c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1" exitCode=0 Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.241276 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdb9g" event={"ID":"dc4dce0b-d488-4d43-af9b-6ce5b92372da","Type":"ContainerDied","Data":"c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1"} Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.241299 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdb9g" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.241312 4704 scope.go:117] "RemoveContainer" containerID="c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.241302 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdb9g" event={"ID":"dc4dce0b-d488-4d43-af9b-6ce5b92372da","Type":"ContainerDied","Data":"0299bdeaa5c91ed98ff2f9f036bc63c680151e547b9db420b8fc5e58b4fe71a5"} Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.264321 4704 scope.go:117] "RemoveContainer" containerID="bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.275007 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdb9g"] Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.284871 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdb9g"] Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.316737 4704 scope.go:117] "RemoveContainer" containerID="eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.332937 4704 scope.go:117] "RemoveContainer" containerID="c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1" Jan 22 17:03:50 crc kubenswrapper[4704]: E0122 17:03:50.333406 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1\": container with ID starting with c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1 not found: ID does not exist" containerID="c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.333536 4704 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1"} err="failed to get container status \"c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1\": rpc error: code = NotFound desc = could not find container \"c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1\": container with ID starting with c9abe585d582a2314c78bfc4e84a6b79670fad3c84930a02d997a4c253984ed1 not found: ID does not exist" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.333621 4704 scope.go:117] "RemoveContainer" containerID="bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1" Jan 22 17:03:50 crc kubenswrapper[4704]: E0122 17:03:50.334135 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1\": container with ID starting with bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1 not found: ID does not exist" containerID="bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.334175 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1"} err="failed to get container status \"bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1\": rpc error: code = NotFound desc = could not find container \"bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1\": container with ID starting with bebe65617777d26e1b479d774d342e112b72a0b8d605548bbf72a631689568e1 not found: ID does not exist" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.334205 4704 scope.go:117] "RemoveContainer" containerID="eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746" Jan 22 17:03:50 crc kubenswrapper[4704]: E0122 
17:03:50.334433 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746\": container with ID starting with eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746 not found: ID does not exist" containerID="eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746" Jan 22 17:03:50 crc kubenswrapper[4704]: I0122 17:03:50.334463 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746"} err="failed to get container status \"eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746\": rpc error: code = NotFound desc = could not find container \"eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746\": container with ID starting with eeefa546d87980688d24cabae4acd5771fef9df13b40502de0dc4d9a71853746 not found: ID does not exist" Jan 22 17:03:51 crc kubenswrapper[4704]: I0122 17:03:51.641126 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" path="/var/lib/kubelet/pods/dc4dce0b-d488-4d43-af9b-6ce5b92372da/volumes" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.806223 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.818418 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.818465 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.830129 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.861327 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.868542 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.902467 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:52 crc kubenswrapper[4704]: I0122 17:03:52.938240 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:53 crc kubenswrapper[4704]: I0122 17:03:53.269542 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:53 crc kubenswrapper[4704]: I0122 17:03:53.271979 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:03:53 crc kubenswrapper[4704]: I0122 17:03:53.278978 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:03:53 crc kubenswrapper[4704]: I0122 17:03:53.319911 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:03:53 crc kubenswrapper[4704]: I0122 17:03:53.341400 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:03:55 crc kubenswrapper[4704]: I0122 17:03:55.367377 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:55 crc kubenswrapper[4704]: I0122 17:03:55.367980 4704 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="ceilometer-central-agent" containerID="cri-o://ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3" gracePeriod=30 Jan 22 17:03:55 crc kubenswrapper[4704]: I0122 17:03:55.368116 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="proxy-httpd" containerID="cri-o://1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c" gracePeriod=30 Jan 22 17:03:55 crc kubenswrapper[4704]: I0122 17:03:55.368158 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="sg-core" containerID="cri-o://de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68" gracePeriod=30 Jan 22 17:03:55 crc kubenswrapper[4704]: I0122 17:03:55.368187 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="ceilometer-notification-agent" containerID="cri-o://ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779" gracePeriod=30 Jan 22 17:03:55 crc kubenswrapper[4704]: I0122 17:03:55.373960 4704 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.223:3000/\": EOF" Jan 22 17:03:56 crc kubenswrapper[4704]: I0122 17:03:56.314046 4704 generic.go:334] "Generic (PLEG): container finished" podID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerID="1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c" exitCode=0 Jan 22 17:03:56 crc kubenswrapper[4704]: I0122 
17:03:56.314327 4704 generic.go:334] "Generic (PLEG): container finished" podID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerID="de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68" exitCode=2 Jan 22 17:03:56 crc kubenswrapper[4704]: I0122 17:03:56.314428 4704 generic.go:334] "Generic (PLEG): container finished" podID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerID="ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3" exitCode=0 Jan 22 17:03:56 crc kubenswrapper[4704]: I0122 17:03:56.314140 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerDied","Data":"1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c"} Jan 22 17:03:56 crc kubenswrapper[4704]: I0122 17:03:56.314621 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerDied","Data":"de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68"} Jan 22 17:03:56 crc kubenswrapper[4704]: I0122 17:03:56.314709 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerDied","Data":"ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3"} Jan 22 17:03:58 crc kubenswrapper[4704]: I0122 17:03:58.231536 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:58 crc kubenswrapper[4704]: I0122 17:03:58.292778 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.272208 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307031 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qfrh\" (UniqueName: \"kubernetes.io/projected/8109542d-f35c-4bf4-bbdf-70184e4ce35b-kube-api-access-9qfrh\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307456 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-ceilometer-tls-certs\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307538 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-log-httpd\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307571 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-run-httpd\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307609 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-config-data\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307646 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-scripts\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307680 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-sg-core-conf-yaml\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.307735 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-combined-ca-bundle\") pod \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\" (UID: \"8109542d-f35c-4bf4-bbdf-70184e4ce35b\") " Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.308914 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.313548 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8109542d-f35c-4bf4-bbdf-70184e4ce35b-kube-api-access-9qfrh" (OuterVolumeSpecName: "kube-api-access-9qfrh") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "kube-api-access-9qfrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.313588 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.319287 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-scripts" (OuterVolumeSpecName: "scripts") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.331546 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.346632 4704 generic.go:334] "Generic (PLEG): container finished" podID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerID="ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779" exitCode=0 Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.346673 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerDied","Data":"ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779"} Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.346697 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8109542d-f35c-4bf4-bbdf-70184e4ce35b","Type":"ContainerDied","Data":"ccb03aa3d00456bbb90435abab722fc710cfdb583372e09eca388bb8864f8f57"} Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.346712 4704 scope.go:117] "RemoveContainer" containerID="1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.346843 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.354106 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.383673 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.396822 4704 scope.go:117] "RemoveContainer" containerID="de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.405482 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-config-data" (OuterVolumeSpecName: "config-data") pod "8109542d-f35c-4bf4-bbdf-70184e4ce35b" (UID: "8109542d-f35c-4bf4-bbdf-70184e4ce35b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409395 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409425 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409437 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409447 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qfrh\" (UniqueName: \"kubernetes.io/projected/8109542d-f35c-4bf4-bbdf-70184e4ce35b-kube-api-access-9qfrh\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409456 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409465 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409473 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8109542d-f35c-4bf4-bbdf-70184e4ce35b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.409481 4704 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8109542d-f35c-4bf4-bbdf-70184e4ce35b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.415947 4704 scope.go:117] "RemoveContainer" containerID="ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.430972 4704 scope.go:117] "RemoveContainer" containerID="ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.449377 4704 scope.go:117] "RemoveContainer" containerID="1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.453055 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c\": container with ID starting with 1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c not found: ID does not exist" containerID="1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.453096 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c"} err="failed to get container status \"1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c\": rpc error: code = NotFound desc = could not find container \"1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c\": container with ID starting with 1fef51c985fcd268a8c655401e9f2cf8db3c4ae3d9cf37320eaba6bd7e3e884c not found: ID does not exist" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.453120 4704 scope.go:117] "RemoveContainer" containerID="de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68" Jan 22 17:03:59 crc 
kubenswrapper[4704]: E0122 17:03:59.453377 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68\": container with ID starting with de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68 not found: ID does not exist" containerID="de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.453417 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68"} err="failed to get container status \"de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68\": rpc error: code = NotFound desc = could not find container \"de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68\": container with ID starting with de714d1c8531841f0ee8001c701561e0cec2a7b3551c160b141ec0df732b9a68 not found: ID does not exist" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.453446 4704 scope.go:117] "RemoveContainer" containerID="ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.453850 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779\": container with ID starting with ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779 not found: ID does not exist" containerID="ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.453877 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779"} err="failed to get container status 
\"ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779\": rpc error: code = NotFound desc = could not find container \"ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779\": container with ID starting with ad67c5b457f0f1c9f08259804a607eba75e32dd7fc5efe2c0632d6ec66fc8779 not found: ID does not exist" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.453895 4704 scope.go:117] "RemoveContainer" containerID="ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.454186 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3\": container with ID starting with ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3 not found: ID does not exist" containerID="ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.454212 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3"} err="failed to get container status \"ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3\": rpc error: code = NotFound desc = could not find container \"ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3\": container with ID starting with ca1c876129b893391a6afc0ea8487b278923298bd96d20c2826cfc89dee560f3 not found: ID does not exist" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.699748 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.707656 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720184 4704 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.720547 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="extract-utilities" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720570 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="extract-utilities" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.720591 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="extract-content" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720601 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="extract-content" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.720629 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="sg-core" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720637 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="sg-core" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.720657 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="proxy-httpd" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720664 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="proxy-httpd" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.720676 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="ceilometer-notification-agent" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720685 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" 
containerName="ceilometer-notification-agent" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.720701 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="registry-server" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720709 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="registry-server" Jan 22 17:03:59 crc kubenswrapper[4704]: E0122 17:03:59.720719 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="ceilometer-central-agent" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720727 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="ceilometer-central-agent" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720943 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="ceilometer-notification-agent" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.720959 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4dce0b-d488-4d43-af9b-6ce5b92372da" containerName="registry-server" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.722512 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="sg-core" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.722537 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="proxy-httpd" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.722566 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" containerName="ceilometer-central-agent" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.725116 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.729247 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.729392 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.730467 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.776469 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.815632 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-run-httpd\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.815742 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-scripts\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.816203 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.816304 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-config-data\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.816340 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-log-httpd\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.816423 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.816515 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.816559 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqshn\" (UniqueName: \"kubernetes.io/projected/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-kube-api-access-sqshn\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918417 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-log-httpd\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918468 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918497 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918518 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqshn\" (UniqueName: \"kubernetes.io/projected/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-kube-api-access-sqshn\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918541 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-run-httpd\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918595 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-scripts\") pod \"ceilometer-0\" (UID: 
\"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918651 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.918674 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-config-data\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.919398 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-log-httpd\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.920093 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-run-httpd\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.923764 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.923789 4704 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-scripts\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.923809 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.924997 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-config-data\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.933729 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:03:59 crc kubenswrapper[4704]: I0122 17:03:59.936681 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqshn\" (UniqueName: \"kubernetes.io/projected/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-kube-api-access-sqshn\") pod \"ceilometer-0\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.040419 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.142572 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq"] Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.144601 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.147331 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.147556 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.164362 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq"] Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.225959 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.226101 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-scripts-volume\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.226215 4704 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-config-data\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.226235 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddgj\" (UniqueName: \"kubernetes.io/projected/f37b7493-2791-4b6b-9779-acdd33467b42-kube-api-access-qddgj\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.327517 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-scripts-volume\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.329023 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-config-data\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.329149 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddgj\" (UniqueName: \"kubernetes.io/projected/f37b7493-2791-4b6b-9779-acdd33467b42-kube-api-access-qddgj\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.329858 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.331828 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-scripts-volume\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.332093 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-config-data\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.333746 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.361642 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddgj\" (UniqueName: \"kubernetes.io/projected/f37b7493-2791-4b6b-9779-acdd33467b42-kube-api-access-qddgj\") pod 
\"watcher-kuttl-db-purge-29485024-t2gzq\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.486780 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.559705 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:04:00 crc kubenswrapper[4704]: W0122 17:04:00.568611 4704 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b86cf71_6d88_43c9_a1d9_94aee9ee4b61.slice/crio-01d959037ba021a0b25841a1e742d215d966db96d75b2515c261db29fbdf7273 WatchSource:0}: Error finding container 01d959037ba021a0b25841a1e742d215d966db96d75b2515c261db29fbdf7273: Status 404 returned error can't find the container with id 01d959037ba021a0b25841a1e742d215d966db96d75b2515c261db29fbdf7273 Jan 22 17:04:00 crc kubenswrapper[4704]: I0122 17:04:00.969362 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq"] Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.387699 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerStarted","Data":"66f5ade15a8ec877cbffc7dad344990c4dc9c7952e4efc0c7b39474f1f6d50a6"} Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.388076 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerStarted","Data":"01d959037ba021a0b25841a1e742d215d966db96d75b2515c261db29fbdf7273"} Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.390749 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" event={"ID":"f37b7493-2791-4b6b-9779-acdd33467b42","Type":"ContainerStarted","Data":"25bb71dfef71ccbfc2c9c4a48e8240524db51f868c8fdecd00186a9a5fa2ab65"} Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.390804 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" event={"ID":"f37b7493-2791-4b6b-9779-acdd33467b42","Type":"ContainerStarted","Data":"6badabd17facc069ff1559d5b1dda44bc84d812e322996f0c5a7b5a959c90b7f"} Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.424432 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" podStartSLOduration=1.424410107 podStartE2EDuration="1.424410107s" podCreationTimestamp="2026-01-22 17:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 17:04:01.415499534 +0000 UTC m=+2134.060046244" watchObservedRunningTime="2026-01-22 17:04:01.424410107 +0000 UTC m=+2134.068956807" Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.642356 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8109542d-f35c-4bf4-bbdf-70184e4ce35b" path="/var/lib/kubelet/pods/8109542d-f35c-4bf4-bbdf-70184e4ce35b/volumes" Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.856862 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmnl8"] Jan 22 17:04:01 crc kubenswrapper[4704]: I0122 17:04:01.857123 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pmnl8" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="registry-server" containerID="cri-o://a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4" gracePeriod=2 Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.359572 
4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.402270 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerStarted","Data":"c8d796cdfb6d7a46a243a6a48d0b846c4ff51bc5e2c86aa71a4494b7d567cdce"} Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.405553 4704 generic.go:334] "Generic (PLEG): container finished" podID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerID="a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4" exitCode=0 Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.406498 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmnl8" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.407031 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmnl8" event={"ID":"48fb0ce2-d18c-4672-ae25-974c1325bbcc","Type":"ContainerDied","Data":"a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4"} Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.407064 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmnl8" event={"ID":"48fb0ce2-d18c-4672-ae25-974c1325bbcc","Type":"ContainerDied","Data":"e3dd8a70b1fff4daa41769ec9af688f699e20a86ae014d30f9452941abc1bbc4"} Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.407086 4704 scope.go:117] "RemoveContainer" containerID="a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.430944 4704 scope.go:117] "RemoveContainer" containerID="097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.464341 4704 scope.go:117] "RemoveContainer" 
containerID="7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.470351 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-utilities\") pod \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.470635 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkm42\" (UniqueName: \"kubernetes.io/projected/48fb0ce2-d18c-4672-ae25-974c1325bbcc-kube-api-access-rkm42\") pod \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.470784 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-catalog-content\") pod \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\" (UID: \"48fb0ce2-d18c-4672-ae25-974c1325bbcc\") " Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.471176 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-utilities" (OuterVolumeSpecName: "utilities") pod "48fb0ce2-d18c-4672-ae25-974c1325bbcc" (UID: "48fb0ce2-d18c-4672-ae25-974c1325bbcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.474813 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fb0ce2-d18c-4672-ae25-974c1325bbcc-kube-api-access-rkm42" (OuterVolumeSpecName: "kube-api-access-rkm42") pod "48fb0ce2-d18c-4672-ae25-974c1325bbcc" (UID: "48fb0ce2-d18c-4672-ae25-974c1325bbcc"). InnerVolumeSpecName "kube-api-access-rkm42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.496228 4704 scope.go:117] "RemoveContainer" containerID="a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4" Jan 22 17:04:02 crc kubenswrapper[4704]: E0122 17:04:02.496613 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4\": container with ID starting with a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4 not found: ID does not exist" containerID="a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.496640 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4"} err="failed to get container status \"a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4\": rpc error: code = NotFound desc = could not find container \"a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4\": container with ID starting with a68f2c0dc3d04461ef40950bd197b88e53e45f922d4f7acaf9b04650ea4370a4 not found: ID does not exist" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.496658 4704 scope.go:117] "RemoveContainer" containerID="097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859" Jan 22 17:04:02 crc kubenswrapper[4704]: E0122 17:04:02.497074 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859\": container with ID starting with 097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859 not found: ID does not exist" containerID="097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.497095 
4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859"} err="failed to get container status \"097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859\": rpc error: code = NotFound desc = could not find container \"097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859\": container with ID starting with 097e474f7b7e721a4feabd4b722832af7f519c1266f7d89b067cc216c87a8859 not found: ID does not exist" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.497107 4704 scope.go:117] "RemoveContainer" containerID="7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062" Jan 22 17:04:02 crc kubenswrapper[4704]: E0122 17:04:02.497575 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062\": container with ID starting with 7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062 not found: ID does not exist" containerID="7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.497607 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062"} err="failed to get container status \"7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062\": rpc error: code = NotFound desc = could not find container \"7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062\": container with ID starting with 7375f11ec688f4721a14f3fc0b44ce07ab1a9b59f60c10728d3d5956f08ac062 not found: ID does not exist" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.572879 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.572921 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkm42\" (UniqueName: \"kubernetes.io/projected/48fb0ce2-d18c-4672-ae25-974c1325bbcc-kube-api-access-rkm42\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.598950 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48fb0ce2-d18c-4672-ae25-974c1325bbcc" (UID: "48fb0ce2-d18c-4672-ae25-974c1325bbcc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.675223 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48fb0ce2-d18c-4672-ae25-974c1325bbcc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.761460 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmnl8"] Jan 22 17:04:02 crc kubenswrapper[4704]: I0122 17:04:02.773769 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pmnl8"] Jan 22 17:04:03 crc kubenswrapper[4704]: I0122 17:04:03.428394 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerStarted","Data":"80ba83e990d481a5601993ffdb8982c86b29faa1d8d286aaaa4f356a3aff3efa"} Jan 22 17:04:03 crc kubenswrapper[4704]: I0122 17:04:03.651045 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" path="/var/lib/kubelet/pods/48fb0ce2-d18c-4672-ae25-974c1325bbcc/volumes" Jan 22 17:04:04 crc kubenswrapper[4704]: I0122 17:04:04.440645 4704 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerStarted","Data":"d33b1efe1b665bf6e7f08d44e358a2502f501a0d2e4c11ecb1ee5e5b877a1132"} Jan 22 17:04:04 crc kubenswrapper[4704]: I0122 17:04:04.441101 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:04 crc kubenswrapper[4704]: I0122 17:04:04.443024 4704 generic.go:334] "Generic (PLEG): container finished" podID="f37b7493-2791-4b6b-9779-acdd33467b42" containerID="25bb71dfef71ccbfc2c9c4a48e8240524db51f868c8fdecd00186a9a5fa2ab65" exitCode=0 Jan 22 17:04:04 crc kubenswrapper[4704]: I0122 17:04:04.443072 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" event={"ID":"f37b7493-2791-4b6b-9779-acdd33467b42","Type":"ContainerDied","Data":"25bb71dfef71ccbfc2c9c4a48e8240524db51f868c8fdecd00186a9a5fa2ab65"} Jan 22 17:04:04 crc kubenswrapper[4704]: I0122 17:04:04.464323 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.250151954 podStartE2EDuration="5.46430771s" podCreationTimestamp="2026-01-22 17:03:59 +0000 UTC" firstStartedPulling="2026-01-22 17:04:00.570519657 +0000 UTC m=+2133.215066357" lastFinishedPulling="2026-01-22 17:04:03.784675413 +0000 UTC m=+2136.429222113" observedRunningTime="2026-01-22 17:04:04.462807877 +0000 UTC m=+2137.107354587" watchObservedRunningTime="2026-01-22 17:04:04.46430771 +0000 UTC m=+2137.108854410" Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.818757 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.938787 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qddgj\" (UniqueName: \"kubernetes.io/projected/f37b7493-2791-4b6b-9779-acdd33467b42-kube-api-access-qddgj\") pod \"f37b7493-2791-4b6b-9779-acdd33467b42\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.938911 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-config-data\") pod \"f37b7493-2791-4b6b-9779-acdd33467b42\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.939000 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-combined-ca-bundle\") pod \"f37b7493-2791-4b6b-9779-acdd33467b42\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.939077 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-scripts-volume\") pod \"f37b7493-2791-4b6b-9779-acdd33467b42\" (UID: \"f37b7493-2791-4b6b-9779-acdd33467b42\") " Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.943862 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "f37b7493-2791-4b6b-9779-acdd33467b42" (UID: "f37b7493-2791-4b6b-9779-acdd33467b42"). InnerVolumeSpecName "scripts-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.943925 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37b7493-2791-4b6b-9779-acdd33467b42-kube-api-access-qddgj" (OuterVolumeSpecName: "kube-api-access-qddgj") pod "f37b7493-2791-4b6b-9779-acdd33467b42" (UID: "f37b7493-2791-4b6b-9779-acdd33467b42"). InnerVolumeSpecName "kube-api-access-qddgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.964878 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f37b7493-2791-4b6b-9779-acdd33467b42" (UID: "f37b7493-2791-4b6b-9779-acdd33467b42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:05 crc kubenswrapper[4704]: I0122 17:04:05.981560 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-config-data" (OuterVolumeSpecName: "config-data") pod "f37b7493-2791-4b6b-9779-acdd33467b42" (UID: "f37b7493-2791-4b6b-9779-acdd33467b42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:06 crc kubenswrapper[4704]: I0122 17:04:06.041026 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qddgj\" (UniqueName: \"kubernetes.io/projected/f37b7493-2791-4b6b-9779-acdd33467b42-kube-api-access-qddgj\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:06 crc kubenswrapper[4704]: I0122 17:04:06.041067 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:06 crc kubenswrapper[4704]: I0122 17:04:06.041077 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:06 crc kubenswrapper[4704]: I0122 17:04:06.041084 4704 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/f37b7493-2791-4b6b-9779-acdd33467b42-scripts-volume\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:06 crc kubenswrapper[4704]: I0122 17:04:06.458299 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" event={"ID":"f37b7493-2791-4b6b-9779-acdd33467b42","Type":"ContainerDied","Data":"6badabd17facc069ff1559d5b1dda44bc84d812e322996f0c5a7b5a959c90b7f"} Jan 22 17:04:06 crc kubenswrapper[4704]: I0122 17:04:06.458338 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6badabd17facc069ff1559d5b1dda44bc84d812e322996f0c5a7b5a959c90b7f" Jan 22 17:04:06 crc kubenswrapper[4704]: I0122 17:04:06.458345 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.297259 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.306550 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-r8fqd"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.312755 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.318663 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29485024-t2gzq"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.358252 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-kwz5c"] Jan 22 17:04:10 crc kubenswrapper[4704]: E0122 17:04:10.358539 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="extract-content" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.358550 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="extract-content" Jan 22 17:04:10 crc kubenswrapper[4704]: E0122 17:04:10.358581 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="registry-server" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.358587 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="registry-server" Jan 22 17:04:10 crc kubenswrapper[4704]: E0122 17:04:10.358597 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="extract-utilities" Jan 22 
17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.358605 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="extract-utilities" Jan 22 17:04:10 crc kubenswrapper[4704]: E0122 17:04:10.358615 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37b7493-2791-4b6b-9779-acdd33467b42" containerName="watcher-db-manage" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.358621 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37b7493-2791-4b6b-9779-acdd33467b42" containerName="watcher-db-manage" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.358754 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fb0ce2-d18c-4672-ae25-974c1325bbcc" containerName="registry-server" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.358770 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37b7493-2791-4b6b-9779-acdd33467b42" containerName="watcher-db-manage" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.359289 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.378387 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.378629 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="b866f01a-a70c-4f93-b005-3661f5a1be3c" containerName="watcher-decision-engine" containerID="cri-o://691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2" gracePeriod=30 Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.393636 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-kwz5c"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.433265 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.433509 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-kuttl-api-log" containerID="cri-o://071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec" gracePeriod=30 Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.433904 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-api" containerID="cri-o://dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2" gracePeriod=30 Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.435206 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vqr\" (UniqueName: 
\"kubernetes.io/projected/5e557ffc-1883-454b-bd9c-ba330ae4cbef-kube-api-access-s4vqr\") pod \"watchertest-account-delete-kwz5c\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.435248 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e557ffc-1883-454b-bd9c-ba330ae4cbef-operator-scripts\") pod \"watchertest-account-delete-kwz5c\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.452495 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.452763 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-kuttl-api-log" containerID="cri-o://93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec" gracePeriod=30 Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.453151 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-api" containerID="cri-o://38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb" gracePeriod=30 Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.469324 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.469573 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="de55292b-e231-4674-b0c5-635bb5ca45d0" 
containerName="watcher-applier" containerID="cri-o://63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" gracePeriod=30 Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.536633 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e557ffc-1883-454b-bd9c-ba330ae4cbef-operator-scripts\") pod \"watchertest-account-delete-kwz5c\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.536855 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vqr\" (UniqueName: \"kubernetes.io/projected/5e557ffc-1883-454b-bd9c-ba330ae4cbef-kube-api-access-s4vqr\") pod \"watchertest-account-delete-kwz5c\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.537968 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e557ffc-1883-454b-bd9c-ba330ae4cbef-operator-scripts\") pod \"watchertest-account-delete-kwz5c\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.571353 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vqr\" (UniqueName: \"kubernetes.io/projected/5e557ffc-1883-454b-bd9c-ba330ae4cbef-kube-api-access-s4vqr\") pod \"watchertest-account-delete-kwz5c\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:10 crc kubenswrapper[4704]: I0122 17:04:10.685298 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.033746 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-kwz5c"] Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.538820 4704 generic.go:334] "Generic (PLEG): container finished" podID="23173b76-e787-4014-bf87-f8d0f76483c8" containerID="071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec" exitCode=143 Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.538886 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"23173b76-e787-4014-bf87-f8d0f76483c8","Type":"ContainerDied","Data":"071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec"} Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.541310 4704 generic.go:334] "Generic (PLEG): container finished" podID="5e557ffc-1883-454b-bd9c-ba330ae4cbef" containerID="83d92b05303375edc90d3fcb8d7c5fd1bbfc564fab7ef6b439f7046828709a7b" exitCode=0 Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.541364 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" event={"ID":"5e557ffc-1883-454b-bd9c-ba330ae4cbef","Type":"ContainerDied","Data":"83d92b05303375edc90d3fcb8d7c5fd1bbfc564fab7ef6b439f7046828709a7b"} Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.541537 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" event={"ID":"5e557ffc-1883-454b-bd9c-ba330ae4cbef","Type":"ContainerStarted","Data":"48154123501c26f31521610fca5e566ecb941d3905daaba559d9e39fc5193b4e"} Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.543617 4704 generic.go:334] "Generic (PLEG): container finished" podID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerID="93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec" 
exitCode=143 Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.543662 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2af8130-e779-48b7-9eb2-fa1c2f709020","Type":"ContainerDied","Data":"93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec"} Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.645660 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274b282d-041c-498e-93c4-d880467b21ce" path="/var/lib/kubelet/pods/274b282d-041c-498e-93c4-d880467b21ce/volumes" Jan 22 17:04:11 crc kubenswrapper[4704]: I0122 17:04:11.646871 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37b7493-2791-4b6b-9779-acdd33467b42" path="/var/lib/kubelet/pods/f37b7493-2791-4b6b-9779-acdd33467b42/volumes" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.207482 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.248708 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.267546 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b4fv\" (UniqueName: \"kubernetes.io/projected/23173b76-e787-4014-bf87-f8d0f76483c8-kube-api-access-7b4fv\") pod \"23173b76-e787-4014-bf87-f8d0f76483c8\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.267914 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-custom-prometheus-ca\") pod \"23173b76-e787-4014-bf87-f8d0f76483c8\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.268131 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-config-data\") pod \"23173b76-e787-4014-bf87-f8d0f76483c8\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.268197 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23173b76-e787-4014-bf87-f8d0f76483c8-logs\") pod \"23173b76-e787-4014-bf87-f8d0f76483c8\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.268248 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-cert-memcached-mtls\") pod \"23173b76-e787-4014-bf87-f8d0f76483c8\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.268274 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-combined-ca-bundle\") pod \"23173b76-e787-4014-bf87-f8d0f76483c8\" (UID: \"23173b76-e787-4014-bf87-f8d0f76483c8\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.270603 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23173b76-e787-4014-bf87-f8d0f76483c8-logs" (OuterVolumeSpecName: "logs") pod "23173b76-e787-4014-bf87-f8d0f76483c8" (UID: "23173b76-e787-4014-bf87-f8d0f76483c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.291164 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23173b76-e787-4014-bf87-f8d0f76483c8-kube-api-access-7b4fv" (OuterVolumeSpecName: "kube-api-access-7b4fv") pod "23173b76-e787-4014-bf87-f8d0f76483c8" (UID: "23173b76-e787-4014-bf87-f8d0f76483c8"). InnerVolumeSpecName "kube-api-access-7b4fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.299075 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23173b76-e787-4014-bf87-f8d0f76483c8" (UID: "23173b76-e787-4014-bf87-f8d0f76483c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.339907 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "23173b76-e787-4014-bf87-f8d0f76483c8" (UID: "23173b76-e787-4014-bf87-f8d0f76483c8"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.357577 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-config-data" (OuterVolumeSpecName: "config-data") pod "23173b76-e787-4014-bf87-f8d0f76483c8" (UID: "23173b76-e787-4014-bf87-f8d0f76483c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369148 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-config-data\") pod \"f2af8130-e779-48b7-9eb2-fa1c2f709020\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369240 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-combined-ca-bundle\") pod \"f2af8130-e779-48b7-9eb2-fa1c2f709020\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369270 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-custom-prometheus-ca\") pod \"f2af8130-e779-48b7-9eb2-fa1c2f709020\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369340 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-cert-memcached-mtls\") pod \"f2af8130-e779-48b7-9eb2-fa1c2f709020\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369366 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mc8d\" (UniqueName: \"kubernetes.io/projected/f2af8130-e779-48b7-9eb2-fa1c2f709020-kube-api-access-7mc8d\") pod \"f2af8130-e779-48b7-9eb2-fa1c2f709020\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369383 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2af8130-e779-48b7-9eb2-fa1c2f709020-logs\") pod \"f2af8130-e779-48b7-9eb2-fa1c2f709020\" (UID: \"f2af8130-e779-48b7-9eb2-fa1c2f709020\") " Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369740 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369756 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b4fv\" (UniqueName: \"kubernetes.io/projected/23173b76-e787-4014-bf87-f8d0f76483c8-kube-api-access-7b4fv\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369766 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369776 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.369784 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23173b76-e787-4014-bf87-f8d0f76483c8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc 
kubenswrapper[4704]: I0122 17:04:12.370175 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2af8130-e779-48b7-9eb2-fa1c2f709020-logs" (OuterVolumeSpecName: "logs") pod "f2af8130-e779-48b7-9eb2-fa1c2f709020" (UID: "f2af8130-e779-48b7-9eb2-fa1c2f709020"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.373278 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2af8130-e779-48b7-9eb2-fa1c2f709020-kube-api-access-7mc8d" (OuterVolumeSpecName: "kube-api-access-7mc8d") pod "f2af8130-e779-48b7-9eb2-fa1c2f709020" (UID: "f2af8130-e779-48b7-9eb2-fa1c2f709020"). InnerVolumeSpecName "kube-api-access-7mc8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.380972 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "23173b76-e787-4014-bf87-f8d0f76483c8" (UID: "23173b76-e787-4014-bf87-f8d0f76483c8"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.388572 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f2af8130-e779-48b7-9eb2-fa1c2f709020" (UID: "f2af8130-e779-48b7-9eb2-fa1c2f709020"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.394204 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2af8130-e779-48b7-9eb2-fa1c2f709020" (UID: "f2af8130-e779-48b7-9eb2-fa1c2f709020"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.410340 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-config-data" (OuterVolumeSpecName: "config-data") pod "f2af8130-e779-48b7-9eb2-fa1c2f709020" (UID: "f2af8130-e779-48b7-9eb2-fa1c2f709020"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.431319 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f2af8130-e779-48b7-9eb2-fa1c2f709020" (UID: "f2af8130-e779-48b7-9eb2-fa1c2f709020"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.471671 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.471704 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.471716 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.471725 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/23173b76-e787-4014-bf87-f8d0f76483c8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.471735 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2af8130-e779-48b7-9eb2-fa1c2f709020-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.471744 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mc8d\" (UniqueName: \"kubernetes.io/projected/f2af8130-e779-48b7-9eb2-fa1c2f709020-kube-api-access-7mc8d\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.471753 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2af8130-e779-48b7-9eb2-fa1c2f709020-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 
17:04:12.556061 4704 generic.go:334] "Generic (PLEG): container finished" podID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerID="38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb" exitCode=0 Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.556150 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.556150 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2af8130-e779-48b7-9eb2-fa1c2f709020","Type":"ContainerDied","Data":"38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb"} Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.556696 4704 scope.go:117] "RemoveContainer" containerID="38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.556623 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2af8130-e779-48b7-9eb2-fa1c2f709020","Type":"ContainerDied","Data":"83ccaf0d7185fb02ceb84480d49ef7e056dcd6b76e3bb4b4c851e6d2ce308014"} Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.559243 4704 generic.go:334] "Generic (PLEG): container finished" podID="23173b76-e787-4014-bf87-f8d0f76483c8" containerID="dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2" exitCode=0 Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.559305 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.559351 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"23173b76-e787-4014-bf87-f8d0f76483c8","Type":"ContainerDied","Data":"dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2"} Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.559375 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"23173b76-e787-4014-bf87-f8d0f76483c8","Type":"ContainerDied","Data":"fb631780b607cb45e9339d9a64bf6094ad451763ffa43e088e85628e9c7b07bd"} Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.717842 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.726115 4704 scope.go:117] "RemoveContainer" containerID="93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.733461 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.743012 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.753605 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.757928 4704 scope.go:117] "RemoveContainer" containerID="38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb" Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.758366 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb\": container 
with ID starting with 38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb not found: ID does not exist" containerID="38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.758396 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb"} err="failed to get container status \"38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb\": rpc error: code = NotFound desc = could not find container \"38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb\": container with ID starting with 38240f0447f2e6026e4042d9b0a284f6803d6c91fa16fb63b1c98328e0d5cceb not found: ID does not exist" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.758417 4704 scope.go:117] "RemoveContainer" containerID="93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec" Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.758709 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec\": container with ID starting with 93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec not found: ID does not exist" containerID="93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.758731 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec"} err="failed to get container status \"93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec\": rpc error: code = NotFound desc = could not find container \"93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec\": container with ID starting with 93c94df5261c73d2c9f4b6680b5fde1b80c11b2bbe2004dc4a71a232505e50ec not 
found: ID does not exist" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.758744 4704 scope.go:117] "RemoveContainer" containerID="dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.782315 4704 scope.go:117] "RemoveContainer" containerID="071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.798266 4704 scope.go:117] "RemoveContainer" containerID="dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2" Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.800005 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2\": container with ID starting with dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2 not found: ID does not exist" containerID="dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.800054 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2"} err="failed to get container status \"dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2\": rpc error: code = NotFound desc = could not find container \"dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2\": container with ID starting with dfe34503e2f2d780afe11a1439fa5fa72a940c03e7e79d2c7558cea3c920f7e2 not found: ID does not exist" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.800085 4704 scope.go:117] "RemoveContainer" containerID="071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec" Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.800739 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec\": container with ID starting with 071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec not found: ID does not exist" containerID="071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec" Jan 22 17:04:12 crc kubenswrapper[4704]: I0122 17:04:12.800766 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec"} err="failed to get container status \"071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec\": rpc error: code = NotFound desc = could not find container \"071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec\": container with ID starting with 071b2d239d425ac87e88b068d3c7153165f0926d9dfcfbce788359e4ad9077ec not found: ID does not exist" Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.837491 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.845711 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.847919 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.847989 4704 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="b866f01a-a70c-4f93-b005-3661f5a1be3c" containerName="watcher-decision-engine" Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.905295 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd is running failed: container process not found" containerID="63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.905759 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd is running failed: container process not found" containerID="63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.908200 4704 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd is running failed: container process not found" containerID="63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 17:04:12 crc kubenswrapper[4704]: E0122 17:04:12.908255 
4704 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd is running failed: container process not found" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="de55292b-e231-4674-b0c5-635bb5ca45d0" containerName="watcher-applier" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.034708 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.081660 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.084087 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e557ffc-1883-454b-bd9c-ba330ae4cbef-operator-scripts\") pod \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.084303 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4vqr\" (UniqueName: \"kubernetes.io/projected/5e557ffc-1883-454b-bd9c-ba330ae4cbef-kube-api-access-s4vqr\") pod \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\" (UID: \"5e557ffc-1883-454b-bd9c-ba330ae4cbef\") " Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.084873 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e557ffc-1883-454b-bd9c-ba330ae4cbef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e557ffc-1883-454b-bd9c-ba330ae4cbef" (UID: "5e557ffc-1883-454b-bd9c-ba330ae4cbef"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.087099 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e557ffc-1883-454b-bd9c-ba330ae4cbef-kube-api-access-s4vqr" (OuterVolumeSpecName: "kube-api-access-s4vqr") pod "5e557ffc-1883-454b-bd9c-ba330ae4cbef" (UID: "5e557ffc-1883-454b-bd9c-ba330ae4cbef"). InnerVolumeSpecName "kube-api-access-s4vqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.186174 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-config-data\") pod \"de55292b-e231-4674-b0c5-635bb5ca45d0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.186266 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de55292b-e231-4674-b0c5-635bb5ca45d0-logs\") pod \"de55292b-e231-4674-b0c5-635bb5ca45d0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.186330 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vft94\" (UniqueName: \"kubernetes.io/projected/de55292b-e231-4674-b0c5-635bb5ca45d0-kube-api-access-vft94\") pod \"de55292b-e231-4674-b0c5-635bb5ca45d0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.186403 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-cert-memcached-mtls\") pod \"de55292b-e231-4674-b0c5-635bb5ca45d0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.186429 4704 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-combined-ca-bundle\") pod \"de55292b-e231-4674-b0c5-635bb5ca45d0\" (UID: \"de55292b-e231-4674-b0c5-635bb5ca45d0\") " Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.186576 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de55292b-e231-4674-b0c5-635bb5ca45d0-logs" (OuterVolumeSpecName: "logs") pod "de55292b-e231-4674-b0c5-635bb5ca45d0" (UID: "de55292b-e231-4674-b0c5-635bb5ca45d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.187017 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de55292b-e231-4674-b0c5-635bb5ca45d0-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.187045 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4vqr\" (UniqueName: \"kubernetes.io/projected/5e557ffc-1883-454b-bd9c-ba330ae4cbef-kube-api-access-s4vqr\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.187058 4704 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e557ffc-1883-454b-bd9c-ba330ae4cbef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.189556 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de55292b-e231-4674-b0c5-635bb5ca45d0-kube-api-access-vft94" (OuterVolumeSpecName: "kube-api-access-vft94") pod "de55292b-e231-4674-b0c5-635bb5ca45d0" (UID: "de55292b-e231-4674-b0c5-635bb5ca45d0"). InnerVolumeSpecName "kube-api-access-vft94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.211730 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de55292b-e231-4674-b0c5-635bb5ca45d0" (UID: "de55292b-e231-4674-b0c5-635bb5ca45d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.227831 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-config-data" (OuterVolumeSpecName: "config-data") pod "de55292b-e231-4674-b0c5-635bb5ca45d0" (UID: "de55292b-e231-4674-b0c5-635bb5ca45d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.265517 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "de55292b-e231-4674-b0c5-635bb5ca45d0" (UID: "de55292b-e231-4674-b0c5-635bb5ca45d0"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.288424 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vft94\" (UniqueName: \"kubernetes.io/projected/de55292b-e231-4674-b0c5-635bb5ca45d0-kube-api-access-vft94\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.288692 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.288703 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.288712 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de55292b-e231-4674-b0c5-635bb5ca45d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.393995 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.394567 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-central-agent" containerID="cri-o://66f5ade15a8ec877cbffc7dad344990c4dc9c7952e4efc0c7b39474f1f6d50a6" gracePeriod=30 Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.394654 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="proxy-httpd" 
containerID="cri-o://d33b1efe1b665bf6e7f08d44e358a2502f501a0d2e4c11ecb1ee5e5b877a1132" gracePeriod=30 Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.394648 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-notification-agent" containerID="cri-o://c8d796cdfb6d7a46a243a6a48d0b846c4ff51bc5e2c86aa71a4494b7d567cdce" gracePeriod=30 Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.394627 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="sg-core" containerID="cri-o://80ba83e990d481a5601993ffdb8982c86b29faa1d8d286aaaa4f356a3aff3efa" gracePeriod=30 Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.574572 4704 generic.go:334] "Generic (PLEG): container finished" podID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerID="d33b1efe1b665bf6e7f08d44e358a2502f501a0d2e4c11ecb1ee5e5b877a1132" exitCode=0 Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.574610 4704 generic.go:334] "Generic (PLEG): container finished" podID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerID="80ba83e990d481a5601993ffdb8982c86b29faa1d8d286aaaa4f356a3aff3efa" exitCode=2 Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.574671 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerDied","Data":"d33b1efe1b665bf6e7f08d44e358a2502f501a0d2e4c11ecb1ee5e5b877a1132"} Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.574700 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerDied","Data":"80ba83e990d481a5601993ffdb8982c86b29faa1d8d286aaaa4f356a3aff3efa"} Jan 22 17:04:13 crc kubenswrapper[4704]: 
I0122 17:04:13.576993 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" event={"ID":"5e557ffc-1883-454b-bd9c-ba330ae4cbef","Type":"ContainerDied","Data":"48154123501c26f31521610fca5e566ecb941d3905daaba559d9e39fc5193b4e"} Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.577323 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48154123501c26f31521610fca5e566ecb941d3905daaba559d9e39fc5193b4e" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.577007 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-kwz5c" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.587490 4704 generic.go:334] "Generic (PLEG): container finished" podID="de55292b-e231-4674-b0c5-635bb5ca45d0" containerID="63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" exitCode=0 Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.587530 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"de55292b-e231-4674-b0c5-635bb5ca45d0","Type":"ContainerDied","Data":"63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd"} Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.587552 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"de55292b-e231-4674-b0c5-635bb5ca45d0","Type":"ContainerDied","Data":"e4319795cc11f451845747b547c3c2fd6ce750a8dd15be5a0edf4d6ea86e16f7"} Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.587570 4704 scope.go:117] "RemoveContainer" containerID="63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.587689 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.626544 4704 scope.go:117] "RemoveContainer" containerID="63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" Jan 22 17:04:13 crc kubenswrapper[4704]: E0122 17:04:13.626996 4704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd\": container with ID starting with 63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd not found: ID does not exist" containerID="63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.627038 4704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd"} err="failed to get container status \"63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd\": rpc error: code = NotFound desc = could not find container \"63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd\": container with ID starting with 63af5e7041a2963a3ad64d2a8b6c9fe9d99a0e88088cfaa9cd98701e235533cd not found: ID does not exist" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.645417 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" path="/var/lib/kubelet/pods/23173b76-e787-4014-bf87-f8d0f76483c8/volumes" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.646292 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" path="/var/lib/kubelet/pods/f2af8130-e779-48b7-9eb2-fa1c2f709020/volumes" Jan 22 17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.646786 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 
17:04:13 crc kubenswrapper[4704]: I0122 17:04:13.649651 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.603814 4704 generic.go:334] "Generic (PLEG): container finished" podID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerID="c8d796cdfb6d7a46a243a6a48d0b846c4ff51bc5e2c86aa71a4494b7d567cdce" exitCode=0 Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.604223 4704 generic.go:334] "Generic (PLEG): container finished" podID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerID="66f5ade15a8ec877cbffc7dad344990c4dc9c7952e4efc0c7b39474f1f6d50a6" exitCode=0 Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.603874 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerDied","Data":"c8d796cdfb6d7a46a243a6a48d0b846c4ff51bc5e2c86aa71a4494b7d567cdce"} Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.604303 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerDied","Data":"66f5ade15a8ec877cbffc7dad344990c4dc9c7952e4efc0c7b39474f1f6d50a6"} Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.604321 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61","Type":"ContainerDied","Data":"01d959037ba021a0b25841a1e742d215d966db96d75b2515c261db29fbdf7273"} Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.604336 4704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d959037ba021a0b25841a1e742d215d966db96d75b2515c261db29fbdf7273" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.637911 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711639 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-combined-ca-bundle\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711681 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-scripts\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711715 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-log-httpd\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711754 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-config-data\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711818 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqshn\" (UniqueName: \"kubernetes.io/projected/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-kube-api-access-sqshn\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711875 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-run-httpd\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711924 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-sg-core-conf-yaml\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.711938 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-ceilometer-tls-certs\") pod \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\" (UID: \"6b86cf71-6d88-43c9-a1d9-94aee9ee4b61\") " Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.714101 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.714222 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.718891 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-scripts" (OuterVolumeSpecName: "scripts") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.719957 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-kube-api-access-sqshn" (OuterVolumeSpecName: "kube-api-access-sqshn") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "kube-api-access-sqshn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.768305 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.774459 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.781715 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.808128 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-config-data" (OuterVolumeSpecName: "config-data") pod "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" (UID: "6b86cf71-6d88-43c9-a1d9-94aee9ee4b61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.813670 4704 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.813807 4704 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.813899 4704 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.813977 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.814058 4704 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.814124 4704 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.814202 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:14 crc kubenswrapper[4704]: I0122 17:04:14.814258 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqshn\" (UniqueName: \"kubernetes.io/projected/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61-kube-api-access-sqshn\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.407073 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9jk29"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.413667 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9jk29"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.423819 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-kwz5c"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.429663 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-kwz5c"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.435881 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-n9krw"] Jan 22 17:04:15 crc kubenswrapper[4704]: 
I0122 17:04:15.442920 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-n9krw"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.613905 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.648720 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f718b5d-3eed-45b7-a6eb-a63797e882d3" path="/var/lib/kubelet/pods/3f718b5d-3eed-45b7-a6eb-a63797e882d3/volumes" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.649321 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e557ffc-1883-454b-bd9c-ba330ae4cbef" path="/var/lib/kubelet/pods/5e557ffc-1883-454b-bd9c-ba330ae4cbef/volumes" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.649845 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de55292b-e231-4674-b0c5-635bb5ca45d0" path="/var/lib/kubelet/pods/de55292b-e231-4674-b0c5-635bb5ca45d0/volumes" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.650835 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa2a59d-955e-4b4f-8092-fa24ba640086" path="/var/lib/kubelet/pods/eaa2a59d-955e-4b4f-8092-fa24ba640086/volumes" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.656546 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.663945 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670524 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670844 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" 
containerName="watcher-kuttl-api-log" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670860 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-kuttl-api-log" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670868 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-kuttl-api-log" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670874 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-kuttl-api-log" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670887 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-api" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670893 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-api" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670903 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-notification-agent" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670909 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-notification-agent" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670930 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-central-agent" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670936 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-central-agent" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670947 4704 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="sg-core" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670953 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="sg-core" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670961 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e557ffc-1883-454b-bd9c-ba330ae4cbef" containerName="mariadb-account-delete" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670967 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e557ffc-1883-454b-bd9c-ba330ae4cbef" containerName="mariadb-account-delete" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670978 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de55292b-e231-4674-b0c5-635bb5ca45d0" containerName="watcher-applier" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670983 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="de55292b-e231-4674-b0c5-635bb5ca45d0" containerName="watcher-applier" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.670992 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-api" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.670998 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-api" Jan 22 17:04:15 crc kubenswrapper[4704]: E0122 17:04:15.671008 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="proxy-httpd" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671013 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="proxy-httpd" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671146 4704 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-notification-agent" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671159 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-kuttl-api-log" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671167 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-kuttl-api-log" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671177 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e557ffc-1883-454b-bd9c-ba330ae4cbef" containerName="mariadb-account-delete" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671186 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="proxy-httpd" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671192 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2af8130-e779-48b7-9eb2-fa1c2f709020" containerName="watcher-api" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671201 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="sg-core" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671219 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="23173b76-e787-4014-bf87-f8d0f76483c8" containerName="watcher-api" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671229 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" containerName="ceilometer-central-agent" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.671241 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="de55292b-e231-4674-b0c5-635bb5ca45d0" containerName="watcher-applier" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.672542 4704 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.674521 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.674910 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.677869 4704 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728235 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv2ph\" (UniqueName: \"kubernetes.io/projected/6264a778-2267-4b74-935f-9657112e560e-kube-api-access-rv2ph\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728303 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728327 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728358 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-config-data\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728383 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-scripts\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728403 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6264a778-2267-4b74-935f-9657112e560e-log-httpd\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728506 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.728570 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6264a778-2267-4b74-935f-9657112e560e-run-httpd\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.741251 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.829976 4704 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.830012 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.830038 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-config-data\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.830052 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-scripts\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.830067 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6264a778-2267-4b74-935f-9657112e560e-log-httpd\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.830124 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.830197 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6264a778-2267-4b74-935f-9657112e560e-run-httpd\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.830271 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv2ph\" (UniqueName: \"kubernetes.io/projected/6264a778-2267-4b74-935f-9657112e560e-kube-api-access-rv2ph\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.831383 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6264a778-2267-4b74-935f-9657112e560e-run-httpd\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.831396 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6264a778-2267-4b74-935f-9657112e560e-log-httpd\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.835192 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-scripts\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.835210 4704 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.835210 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.835550 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-config-data\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.843100 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6264a778-2267-4b74-935f-9657112e560e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.857935 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv2ph\" (UniqueName: \"kubernetes.io/projected/6264a778-2267-4b74-935f-9657112e560e-kube-api-access-rv2ph\") pod \"ceilometer-0\" (UID: \"6264a778-2267-4b74-935f-9657112e560e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:15 crc kubenswrapper[4704]: I0122 17:04:15.987515 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:16 crc kubenswrapper[4704]: I0122 17:04:16.462178 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 17:04:16 crc kubenswrapper[4704]: I0122 17:04:16.622651 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6264a778-2267-4b74-935f-9657112e560e","Type":"ContainerStarted","Data":"ce5946192a6fabc07a5adbaf41b679f68cccbfdd2231a4d8a8c22c01fff39701"} Jan 22 17:04:17 crc kubenswrapper[4704]: I0122 17:04:17.640374 4704 generic.go:334] "Generic (PLEG): container finished" podID="b866f01a-a70c-4f93-b005-3661f5a1be3c" containerID="691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2" exitCode=0 Jan 22 17:04:17 crc kubenswrapper[4704]: I0122 17:04:17.645905 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b86cf71-6d88-43c9-a1d9-94aee9ee4b61" path="/var/lib/kubelet/pods/6b86cf71-6d88-43c9-a1d9-94aee9ee4b61/volumes" Jan 22 17:04:17 crc kubenswrapper[4704]: I0122 17:04:17.646582 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b866f01a-a70c-4f93-b005-3661f5a1be3c","Type":"ContainerDied","Data":"691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2"} Jan 22 17:04:17 crc kubenswrapper[4704]: I0122 17:04:17.649016 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6264a778-2267-4b74-935f-9657112e560e","Type":"ContainerStarted","Data":"19526744eee54a0263a66d15f8f948e8376db108bbe94a9627ade783fa3a5602"} Jan 22 17:04:17 crc kubenswrapper[4704]: I0122 17:04:17.982953 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.066031 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-cert-memcached-mtls\") pod \"b866f01a-a70c-4f93-b005-3661f5a1be3c\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.066100 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp8s8\" (UniqueName: \"kubernetes.io/projected/b866f01a-a70c-4f93-b005-3661f5a1be3c-kube-api-access-xp8s8\") pod \"b866f01a-a70c-4f93-b005-3661f5a1be3c\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.066148 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-combined-ca-bundle\") pod \"b866f01a-a70c-4f93-b005-3661f5a1be3c\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.066211 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-custom-prometheus-ca\") pod \"b866f01a-a70c-4f93-b005-3661f5a1be3c\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.066257 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b866f01a-a70c-4f93-b005-3661f5a1be3c-logs\") pod \"b866f01a-a70c-4f93-b005-3661f5a1be3c\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.066378 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-config-data\") pod \"b866f01a-a70c-4f93-b005-3661f5a1be3c\" (UID: \"b866f01a-a70c-4f93-b005-3661f5a1be3c\") " Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.067741 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b866f01a-a70c-4f93-b005-3661f5a1be3c-logs" (OuterVolumeSpecName: "logs") pod "b866f01a-a70c-4f93-b005-3661f5a1be3c" (UID: "b866f01a-a70c-4f93-b005-3661f5a1be3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.071254 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b866f01a-a70c-4f93-b005-3661f5a1be3c-kube-api-access-xp8s8" (OuterVolumeSpecName: "kube-api-access-xp8s8") pod "b866f01a-a70c-4f93-b005-3661f5a1be3c" (UID: "b866f01a-a70c-4f93-b005-3661f5a1be3c"). InnerVolumeSpecName "kube-api-access-xp8s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.087816 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b866f01a-a70c-4f93-b005-3661f5a1be3c" (UID: "b866f01a-a70c-4f93-b005-3661f5a1be3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.110972 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b866f01a-a70c-4f93-b005-3661f5a1be3c" (UID: "b866f01a-a70c-4f93-b005-3661f5a1be3c"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.120403 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-config-data" (OuterVolumeSpecName: "config-data") pod "b866f01a-a70c-4f93-b005-3661f5a1be3c" (UID: "b866f01a-a70c-4f93-b005-3661f5a1be3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.150273 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b866f01a-a70c-4f93-b005-3661f5a1be3c" (UID: "b866f01a-a70c-4f93-b005-3661f5a1be3c"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.171268 4704 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.171312 4704 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.171329 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp8s8\" (UniqueName: \"kubernetes.io/projected/b866f01a-a70c-4f93-b005-3661f5a1be3c-kube-api-access-xp8s8\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.171341 4704 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.171352 4704 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b866f01a-a70c-4f93-b005-3661f5a1be3c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.171362 4704 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b866f01a-a70c-4f93-b005-3661f5a1be3c-logs\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.658744 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b866f01a-a70c-4f93-b005-3661f5a1be3c","Type":"ContainerDied","Data":"b04017efaec0ce464f7b50419996af91c5b621b1174b7251b6b39936b7dcc117"} Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.658840 4704 scope.go:117] "RemoveContainer" containerID="691b68817495b4f418a120416a05999c3816906d8ddd3c548fcff63925a692e2" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.658769 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.673117 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6264a778-2267-4b74-935f-9657112e560e","Type":"ContainerStarted","Data":"c70be8dd207d899ce82336f0ab270d16428daf2c1f21df5412393fca515d08b9"} Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.673417 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6264a778-2267-4b74-935f-9657112e560e","Type":"ContainerStarted","Data":"bcc46c52560ded0a3e09f7e0dee76eff3d2871a058d5521ec7ab02a5ff11797f"} Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.695227 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:04:18 crc kubenswrapper[4704]: I0122 17:04:18.704059 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 17:04:19 crc kubenswrapper[4704]: I0122 17:04:19.085935 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:04:19 crc kubenswrapper[4704]: I0122 17:04:19.085993 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:04:19 crc kubenswrapper[4704]: I0122 17:04:19.651671 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b866f01a-a70c-4f93-b005-3661f5a1be3c" 
path="/var/lib/kubelet/pods/b866f01a-a70c-4f93-b005-3661f5a1be3c/volumes" Jan 22 17:04:20 crc kubenswrapper[4704]: I0122 17:04:20.691068 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6264a778-2267-4b74-935f-9657112e560e","Type":"ContainerStarted","Data":"2f3ab266b6411c499ed10032ada82a36c2e67eb3e99bcea6d9e5649e3edcd498"} Jan 22 17:04:20 crc kubenswrapper[4704]: I0122 17:04:20.691925 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:20 crc kubenswrapper[4704]: I0122 17:04:20.720320 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.747515343 podStartE2EDuration="5.720295673s" podCreationTimestamp="2026-01-22 17:04:15 +0000 UTC" firstStartedPulling="2026-01-22 17:04:16.476583555 +0000 UTC m=+2149.121130255" lastFinishedPulling="2026-01-22 17:04:19.449363885 +0000 UTC m=+2152.093910585" observedRunningTime="2026-01-22 17:04:20.71350163 +0000 UTC m=+2153.358048330" watchObservedRunningTime="2026-01-22 17:04:20.720295673 +0000 UTC m=+2153.364842373" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.466394 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4m9vx"] Jan 22 17:04:31 crc kubenswrapper[4704]: E0122 17:04:31.467831 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b866f01a-a70c-4f93-b005-3661f5a1be3c" containerName="watcher-decision-engine" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.467916 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="b866f01a-a70c-4f93-b005-3661f5a1be3c" containerName="watcher-decision-engine" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.468102 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="b866f01a-a70c-4f93-b005-3661f5a1be3c" containerName="watcher-decision-engine" Jan 22 17:04:31 crc 
kubenswrapper[4704]: I0122 17:04:31.469606 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.488431 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m9vx"] Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.495431 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-utilities\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.495516 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksmtf\" (UniqueName: \"kubernetes.io/projected/88fba716-7f3a-4bde-b0ee-b62a75783db2-kube-api-access-ksmtf\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.495657 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-catalog-content\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.612328 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-utilities\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc 
kubenswrapper[4704]: I0122 17:04:31.612709 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksmtf\" (UniqueName: \"kubernetes.io/projected/88fba716-7f3a-4bde-b0ee-b62a75783db2-kube-api-access-ksmtf\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.612940 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-catalog-content\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.614078 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-catalog-content\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.614437 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-utilities\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 17:04:31.634556 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksmtf\" (UniqueName: \"kubernetes.io/projected/88fba716-7f3a-4bde-b0ee-b62a75783db2-kube-api-access-ksmtf\") pod \"certified-operators-4m9vx\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:31 crc kubenswrapper[4704]: I0122 
17:04:31.795975 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:32 crc kubenswrapper[4704]: I0122 17:04:32.278463 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m9vx"] Jan 22 17:04:32 crc kubenswrapper[4704]: I0122 17:04:32.803393 4704 generic.go:334] "Generic (PLEG): container finished" podID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerID="46bd76a814f50972ac8926954be498c516ac6c541ed8c1a154333ee357da8aae" exitCode=0 Jan 22 17:04:32 crc kubenswrapper[4704]: I0122 17:04:32.803447 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m9vx" event={"ID":"88fba716-7f3a-4bde-b0ee-b62a75783db2","Type":"ContainerDied","Data":"46bd76a814f50972ac8926954be498c516ac6c541ed8c1a154333ee357da8aae"} Jan 22 17:04:32 crc kubenswrapper[4704]: I0122 17:04:32.803482 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m9vx" event={"ID":"88fba716-7f3a-4bde-b0ee-b62a75783db2","Type":"ContainerStarted","Data":"81b70ee643a2da6e68a9116de75cc5e63a7648a6601720bc253cd25ee3d259bb"} Jan 22 17:04:33 crc kubenswrapper[4704]: I0122 17:04:33.813995 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m9vx" event={"ID":"88fba716-7f3a-4bde-b0ee-b62a75783db2","Type":"ContainerStarted","Data":"ae5182aabb66e74f64e4797de9b025d24d1bd6567a5efe58f915759fc945058b"} Jan 22 17:04:34 crc kubenswrapper[4704]: I0122 17:04:34.841279 4704 generic.go:334] "Generic (PLEG): container finished" podID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerID="ae5182aabb66e74f64e4797de9b025d24d1bd6567a5efe58f915759fc945058b" exitCode=0 Jan 22 17:04:34 crc kubenswrapper[4704]: I0122 17:04:34.841852 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m9vx" 
event={"ID":"88fba716-7f3a-4bde-b0ee-b62a75783db2","Type":"ContainerDied","Data":"ae5182aabb66e74f64e4797de9b025d24d1bd6567a5efe58f915759fc945058b"} Jan 22 17:04:35 crc kubenswrapper[4704]: I0122 17:04:35.853528 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m9vx" event={"ID":"88fba716-7f3a-4bde-b0ee-b62a75783db2","Type":"ContainerStarted","Data":"a0ac0d5538cf22027bf29494f274647a86c629e48fe8c7e97f86fbb379bd090c"} Jan 22 17:04:35 crc kubenswrapper[4704]: I0122 17:04:35.872039 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4m9vx" podStartSLOduration=2.42752043 podStartE2EDuration="4.872020075s" podCreationTimestamp="2026-01-22 17:04:31 +0000 UTC" firstStartedPulling="2026-01-22 17:04:32.806564777 +0000 UTC m=+2165.451111477" lastFinishedPulling="2026-01-22 17:04:35.251064422 +0000 UTC m=+2167.895611122" observedRunningTime="2026-01-22 17:04:35.867469906 +0000 UTC m=+2168.512016616" watchObservedRunningTime="2026-01-22 17:04:35.872020075 +0000 UTC m=+2168.516566775" Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.796599 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.798017 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.905280 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.955220 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kw27s/must-gather-7swx5"] Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.957224 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.962131 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kw27s"/"kube-root-ca.crt" Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.963130 4704 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kw27s"/"openshift-service-ca.crt" Jan 22 17:04:41 crc kubenswrapper[4704]: I0122 17:04:41.992435 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kw27s/must-gather-7swx5"] Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.017207 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsvv\" (UniqueName: \"kubernetes.io/projected/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-kube-api-access-8hsvv\") pod \"must-gather-7swx5\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.017290 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-must-gather-output\") pod \"must-gather-7swx5\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.046267 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.118518 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hsvv\" (UniqueName: \"kubernetes.io/projected/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-kube-api-access-8hsvv\") pod \"must-gather-7swx5\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " 
pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.118603 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-must-gather-output\") pod \"must-gather-7swx5\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.118999 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-must-gather-output\") pod \"must-gather-7swx5\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.141591 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsvv\" (UniqueName: \"kubernetes.io/projected/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-kube-api-access-8hsvv\") pod \"must-gather-7swx5\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.273896 4704 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.706307 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kw27s/must-gather-7swx5"] Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.710572 4704 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 17:04:42 crc kubenswrapper[4704]: I0122 17:04:42.909977 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kw27s/must-gather-7swx5" event={"ID":"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8","Type":"ContainerStarted","Data":"dfb70f7d932c49026834b2176a7f0b83b480f453cc73a86d68609183a5f3a99c"} Jan 22 17:04:45 crc kubenswrapper[4704]: I0122 17:04:45.464443 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m9vx"] Jan 22 17:04:45 crc kubenswrapper[4704]: I0122 17:04:45.465351 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4m9vx" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="registry-server" containerID="cri-o://a0ac0d5538cf22027bf29494f274647a86c629e48fe8c7e97f86fbb379bd090c" gracePeriod=2 Jan 22 17:04:45 crc kubenswrapper[4704]: I0122 17:04:45.941513 4704 generic.go:334] "Generic (PLEG): container finished" podID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerID="a0ac0d5538cf22027bf29494f274647a86c629e48fe8c7e97f86fbb379bd090c" exitCode=0 Jan 22 17:04:45 crc kubenswrapper[4704]: I0122 17:04:45.941550 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m9vx" event={"ID":"88fba716-7f3a-4bde-b0ee-b62a75783db2","Type":"ContainerDied","Data":"a0ac0d5538cf22027bf29494f274647a86c629e48fe8c7e97f86fbb379bd090c"} Jan 22 17:04:45 crc kubenswrapper[4704]: I0122 17:04:45.996249 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.086711 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.087567 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.087619 4704 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.088210 4704 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd"} pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.088260 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" containerID="cri-o://23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" gracePeriod=600 Jan 22 17:04:49 crc kubenswrapper[4704]: E0122 17:04:49.599919 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.665227 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.840369 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksmtf\" (UniqueName: \"kubernetes.io/projected/88fba716-7f3a-4bde-b0ee-b62a75783db2-kube-api-access-ksmtf\") pod \"88fba716-7f3a-4bde-b0ee-b62a75783db2\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.840443 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-catalog-content\") pod \"88fba716-7f3a-4bde-b0ee-b62a75783db2\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.840479 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-utilities\") pod \"88fba716-7f3a-4bde-b0ee-b62a75783db2\" (UID: \"88fba716-7f3a-4bde-b0ee-b62a75783db2\") " Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.841977 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-utilities" (OuterVolumeSpecName: "utilities") pod "88fba716-7f3a-4bde-b0ee-b62a75783db2" (UID: "88fba716-7f3a-4bde-b0ee-b62a75783db2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.850469 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fba716-7f3a-4bde-b0ee-b62a75783db2-kube-api-access-ksmtf" (OuterVolumeSpecName: "kube-api-access-ksmtf") pod "88fba716-7f3a-4bde-b0ee-b62a75783db2" (UID: "88fba716-7f3a-4bde-b0ee-b62a75783db2"). InnerVolumeSpecName "kube-api-access-ksmtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.885492 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88fba716-7f3a-4bde-b0ee-b62a75783db2" (UID: "88fba716-7f3a-4bde-b0ee-b62a75783db2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.942526 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksmtf\" (UniqueName: \"kubernetes.io/projected/88fba716-7f3a-4bde-b0ee-b62a75783db2-kube-api-access-ksmtf\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.942569 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:49 crc kubenswrapper[4704]: I0122 17:04:49.942581 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fba716-7f3a-4bde-b0ee-b62a75783db2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.005423 4704 generic.go:334] "Generic (PLEG): container finished" podID="e8e25829-99af-4717-87f3-43a79b9d8c26" 
containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" exitCode=0 Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.005479 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerDied","Data":"23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd"} Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.005510 4704 scope.go:117] "RemoveContainer" containerID="fbfd2dfdd7d5192b0d486e087debbb041d258bd9f348744c87a1d512ab989a16" Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.006812 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:04:50 crc kubenswrapper[4704]: E0122 17:04:50.007140 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.011609 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m9vx" event={"ID":"88fba716-7f3a-4bde-b0ee-b62a75783db2","Type":"ContainerDied","Data":"81b70ee643a2da6e68a9116de75cc5e63a7648a6601720bc253cd25ee3d259bb"} Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.011638 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4m9vx" Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.014771 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kw27s/must-gather-7swx5" event={"ID":"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8","Type":"ContainerStarted","Data":"d467fcb67c7f75fea34f97f051823b3b226c88e6b1df62c6915e3eaa94cb4a3c"} Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.070433 4704 scope.go:117] "RemoveContainer" containerID="a0ac0d5538cf22027bf29494f274647a86c629e48fe8c7e97f86fbb379bd090c" Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.071996 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m9vx"] Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.085738 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4m9vx"] Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.094267 4704 scope.go:117] "RemoveContainer" containerID="ae5182aabb66e74f64e4797de9b025d24d1bd6567a5efe58f915759fc945058b" Jan 22 17:04:50 crc kubenswrapper[4704]: I0122 17:04:50.119685 4704 scope.go:117] "RemoveContainer" containerID="46bd76a814f50972ac8926954be498c516ac6c541ed8c1a154333ee357da8aae" Jan 22 17:04:51 crc kubenswrapper[4704]: I0122 17:04:51.028636 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kw27s/must-gather-7swx5" event={"ID":"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8","Type":"ContainerStarted","Data":"d91d70b0b89bfd4bf31a41eea2154fa5c683549e7ea1b0911cdbade824bcac43"} Jan 22 17:04:51 crc kubenswrapper[4704]: I0122 17:04:51.047253 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kw27s/must-gather-7swx5" podStartSLOduration=3.086374577 podStartE2EDuration="10.047231114s" podCreationTimestamp="2026-01-22 17:04:41 +0000 UTC" firstStartedPulling="2026-01-22 17:04:42.710532772 +0000 UTC m=+2175.355079472" 
lastFinishedPulling="2026-01-22 17:04:49.671389309 +0000 UTC m=+2182.315936009" observedRunningTime="2026-01-22 17:04:51.045198006 +0000 UTC m=+2183.689744716" watchObservedRunningTime="2026-01-22 17:04:51.047231114 +0000 UTC m=+2183.691777824" Jan 22 17:04:51 crc kubenswrapper[4704]: I0122 17:04:51.644736 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" path="/var/lib/kubelet/pods/88fba716-7f3a-4bde-b0ee-b62a75783db2/volumes" Jan 22 17:05:00 crc kubenswrapper[4704]: I0122 17:05:00.633705 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:05:00 crc kubenswrapper[4704]: E0122 17:05:00.634471 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:05:15 crc kubenswrapper[4704]: I0122 17:05:15.634051 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:05:15 crc kubenswrapper[4704]: E0122 17:05:15.634732 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:05:28 crc kubenswrapper[4704]: I0122 17:05:28.633560 4704 scope.go:117] "RemoveContainer" 
containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:05:28 crc kubenswrapper[4704]: E0122 17:05:28.634435 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:05:39 crc kubenswrapper[4704]: I0122 17:05:39.272352 4704 scope.go:117] "RemoveContainer" containerID="8686670078b96dc7ab4fa75139ef50eef55b4c8611c67041e7d9e25e4cd25fe3" Jan 22 17:05:43 crc kubenswrapper[4704]: I0122 17:05:43.634364 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:05:43 crc kubenswrapper[4704]: E0122 17:05:43.635128 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:05:56 crc kubenswrapper[4704]: I0122 17:05:56.633501 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:05:56 crc kubenswrapper[4704]: E0122 17:05:56.634126 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.179157 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g_a94e2442-d107-46dc-98fe-8bfaeb91b0e6/util/0.log" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.286380 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g_a94e2442-d107-46dc-98fe-8bfaeb91b0e6/util/0.log" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.357458 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g_a94e2442-d107-46dc-98fe-8bfaeb91b0e6/pull/0.log" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.388918 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g_a94e2442-d107-46dc-98fe-8bfaeb91b0e6/pull/0.log" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.548252 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g_a94e2442-d107-46dc-98fe-8bfaeb91b0e6/util/0.log" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.579770 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g_a94e2442-d107-46dc-98fe-8bfaeb91b0e6/extract/0.log" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.807285 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-g4q7s_170b5a59-8ffd-47a8-b2b9-a0f48167050d/manager/0.log" Jan 22 17:05:58 crc kubenswrapper[4704]: I0122 17:05:58.883825 4704 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2326c43198e87bb1199365a9d6de5d4fd3e056d42b2f729fd861bf5d22s2w8g_a94e2442-d107-46dc-98fe-8bfaeb91b0e6/pull/0.log" Jan 22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.077320 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-hd8tx_068092e4-bd7d-4f6f-8806-b794f3dbf696/manager/0.log" Jan 22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.190476 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-4hzcj_50d3d899-4725-4b05-8dc8-84152766e963/manager/0.log" Jan 22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.292656 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn_257eeca9-b568-4dba-8647-c37428c6f7b9/util/0.log" Jan 22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.717423 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn_257eeca9-b568-4dba-8647-c37428c6f7b9/pull/0.log" Jan 22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.758213 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn_257eeca9-b568-4dba-8647-c37428c6f7b9/util/0.log" Jan 22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.786505 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn_257eeca9-b568-4dba-8647-c37428c6f7b9/pull/0.log" Jan 22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.970058 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn_257eeca9-b568-4dba-8647-c37428c6f7b9/pull/0.log" Jan 
22 17:05:59 crc kubenswrapper[4704]: I0122 17:05:59.988962 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn_257eeca9-b568-4dba-8647-c37428c6f7b9/extract/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.004968 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e29fb4fbf64e188031c82abfde2c28621f4bfdc1c417658cc96723c26cw5vnn_257eeca9-b568-4dba-8647-c37428c6f7b9/util/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.126015 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-nm6c8_17fe4464-7b64-4efe-b95b-89834259fc79/manager/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.176189 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-ggdqg_7974da72-060f-48cb-b06e-7fae3ecd377d/manager/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.336621 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-5w58r_f166ae0f-3591-4099-bd69-62ec09ba977a/manager/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.602002 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-nggqz_dd19d6a3-d166-41b8-ac16-76d87c51cad5/manager/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.618962 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-77kz5_3c79bdf7-d523-40e2-8539-f28025e1a92f/manager/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.811582 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-b6xnp_b5539e8b-5116-4c16-9b27-6b5958450759/manager/0.log" Jan 22 17:06:00 crc kubenswrapper[4704]: I0122 17:06:00.832610 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-7jps5_8be9d1b7-ad8a-41b0-a578-e26baafcf932/manager/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.010418 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-txdkv_52786693-8d66-4a9d-aff2-b6d4b7c260be/manager/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.035957 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-rkxpv_9cbde52d-972f-41dc-b9b0-6cd275d013a8/manager/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.222061 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-s59f7_48e30eae-1a73-45ab-8ce9-0e64d820d7d6/manager/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.250216 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-pmcms_8ab35638-b730-42d8-ab86-d7573f3b5083/manager/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.427052 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544gxws_a831d8ed-7a07-4105-9c36-c0ce0a60d1db/manager/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.720389 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mzkrc_3ca8c5ba-8c1d-4566-8b22-ce0ba4f10914/registry-server/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.876173 4704 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-675f79667-ng9s7_649e2df4-8666-44f5-9038-275030931053/manager/0.log" Jan 22 17:06:01 crc kubenswrapper[4704]: I0122 17:06:01.924130 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-2ntql_d7747ccf-7f71-46a7-86b2-782561d8c41c/manager/0.log" Jan 22 17:06:02 crc kubenswrapper[4704]: I0122 17:06:02.056851 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-w2xzp_36ac804d-cc67-4975-9b4d-6ccaed33f8e9/manager/0.log" Jan 22 17:06:02 crc kubenswrapper[4704]: I0122 17:06:02.148014 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nnggx_cc5ed116-27c3-4b5d-9fe3-812c0eec8828/operator/0.log" Jan 22 17:06:02 crc kubenswrapper[4704]: I0122 17:06:02.260039 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-h2sh7_361a820d-5d68-41d8-834e-8faf6862ac00/manager/0.log" Jan 22 17:06:02 crc kubenswrapper[4704]: I0122 17:06:02.460918 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-sc4sv_115e9b6d-342e-4161-80a7-fd6786dd97ab/manager/0.log" Jan 22 17:06:02 crc kubenswrapper[4704]: I0122 17:06:02.517217 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-xp2tx_1344217d-c8f9-4f2a-aaba-588a1993e4d2/manager/0.log" Jan 22 17:06:02 crc kubenswrapper[4704]: I0122 17:06:02.819533 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-9jxrh_ce88084a-02b7-45de-bdc8-629e934784ca/registry-server/0.log" Jan 22 17:06:02 crc kubenswrapper[4704]: I0122 17:06:02.930473 4704 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-d9d597c89-9ck8m_99b0e241-3467-43d4-8c17-f1b95d4ea8c3/manager/0.log" Jan 22 17:06:07 crc kubenswrapper[4704]: I0122 17:06:07.639675 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:06:07 crc kubenswrapper[4704]: E0122 17:06:07.640420 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:06:23 crc kubenswrapper[4704]: I0122 17:06:23.633624 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:06:23 crc kubenswrapper[4704]: E0122 17:06:23.634563 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:06:24 crc kubenswrapper[4704]: I0122 17:06:24.326860 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4p2x6_27ee8df2-66e3-4de7-a2c3-c0687e535125/control-plane-machine-set-operator/0.log" Jan 22 17:06:24 crc kubenswrapper[4704]: I0122 17:06:24.548942 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hgdwt_822794ef-a29d-43bb-8e01-ab9aa44ed0be/kube-rbac-proxy/0.log" Jan 22 17:06:24 crc kubenswrapper[4704]: I0122 17:06:24.576754 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hgdwt_822794ef-a29d-43bb-8e01-ab9aa44ed0be/machine-api-operator/0.log" Jan 22 17:06:34 crc kubenswrapper[4704]: I0122 17:06:34.634080 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:06:34 crc kubenswrapper[4704]: E0122 17:06:34.635368 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:06:37 crc kubenswrapper[4704]: I0122 17:06:37.760005 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-6zj75_571d56d1-f2fc-41ab-aff3-d5ae31849f8e/cert-manager-controller/0.log" Jan 22 17:06:37 crc kubenswrapper[4704]: I0122 17:06:37.899077 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2kvx9_f4b1b654-56be-40f7-9051-3a9cd248d3fa/cert-manager-cainjector/0.log" Jan 22 17:06:37 crc kubenswrapper[4704]: I0122 17:06:37.979242 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-c72g6_a268e464-161c-413c-ac49-da3a0c827514/cert-manager-webhook/0.log" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.349533 4704 scope.go:117] "RemoveContainer" containerID="de197db95e4be9977e67d7e90e976193a01340422df2f08260af5f191f1de523" Jan 22 17:06:39 crc 
kubenswrapper[4704]: I0122 17:06:39.373578 4704 scope.go:117] "RemoveContainer" containerID="cd97f2e4d15db4e70b0de4195401bdcd48ea6be29b89a0a4479cb95f841b3176" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.416244 4704 scope.go:117] "RemoveContainer" containerID="784059c66a6c881f4f8187b6cdb1d8b3c9e40f01195ed03b40033f1abc354dcb" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.445690 4704 scope.go:117] "RemoveContainer" containerID="61e13668809eb9fe61020d6754a250461e4c2ce83cf8cad4636772bed90b46cf" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.482328 4704 scope.go:117] "RemoveContainer" containerID="6fd9f84337d32aaec0b2259446873daf6a2a6b9ad3e832040170b6b25c3a23dd" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.501626 4704 scope.go:117] "RemoveContainer" containerID="607ed077fcc302dc95d7ab86055cd7f2920cb11fb0826e68d42feeb8201ed521" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.518881 4704 scope.go:117] "RemoveContainer" containerID="710d67066b59525bf4a66854465e07cdc014f82c78a4ebe4b6a984b070cc168f" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.554488 4704 scope.go:117] "RemoveContainer" containerID="87ef7ee88781891fc56a688f64f3535316f5130d5e49e9da15a49f55e356f24f" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.573716 4704 scope.go:117] "RemoveContainer" containerID="b36d471f3f5a62e16ab896f557903f89884ab6b1b0d09f008d48332194baf72d" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.596590 4704 scope.go:117] "RemoveContainer" containerID="19325db0a96b547ce615cccab8e0d7efeab30f5c6b6c5ecdf9edda4a673b1d0c" Jan 22 17:06:39 crc kubenswrapper[4704]: I0122 17:06:39.614519 4704 scope.go:117] "RemoveContainer" containerID="87dfac8c6c171566ca87abb5fba83bceaad80469729a082951d9a889cd7b5a86" Jan 22 17:06:47 crc kubenswrapper[4704]: I0122 17:06:47.638614 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:06:47 crc kubenswrapper[4704]: E0122 
17:06:47.639443 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:06:50 crc kubenswrapper[4704]: I0122 17:06:50.919706 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-6qvtj_acc2a3ba-8a71-460c-979b-704ea09aa117/nmstate-console-plugin/0.log" Jan 22 17:06:51 crc kubenswrapper[4704]: I0122 17:06:51.104066 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-28zp7_07f6ae7b-7e7f-493c-bf6a-d3ff4233d9bc/nmstate-handler/0.log" Jan 22 17:06:51 crc kubenswrapper[4704]: I0122 17:06:51.167627 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hd6sv_b0c7e587-8794-4b01-ae39-83cb29c3c4c6/kube-rbac-proxy/0.log" Jan 22 17:06:51 crc kubenswrapper[4704]: I0122 17:06:51.256067 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hd6sv_b0c7e587-8794-4b01-ae39-83cb29c3c4c6/nmstate-metrics/0.log" Jan 22 17:06:51 crc kubenswrapper[4704]: I0122 17:06:51.299805 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-z9sc2_8a05cf90-d057-49da-a06d-40a9343b611b/nmstate-operator/0.log" Jan 22 17:06:51 crc kubenswrapper[4704]: I0122 17:06:51.408050 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-4rtd7_88d23917-02a3-4eba-94a8-50b5e3aa06a4/nmstate-webhook/0.log" Jan 22 17:07:02 crc kubenswrapper[4704]: I0122 17:07:02.634832 4704 scope.go:117] "RemoveContainer" 
containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:07:02 crc kubenswrapper[4704]: E0122 17:07:02.635955 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:07:06 crc kubenswrapper[4704]: I0122 17:07:06.384569 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-nptfd_8e5411a6-6909-463f-9794-35459abc62ff/prometheus-operator/0.log" Jan 22 17:07:06 crc kubenswrapper[4704]: I0122 17:07:06.581531 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_581b3ed3-6843-4e85-8187-2718699e8964/prometheus-operator-admission-webhook/0.log" Jan 22 17:07:06 crc kubenswrapper[4704]: I0122 17:07:06.640859 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_c8aeac14-9541-4d77-a63a-087807303ca7/prometheus-operator-admission-webhook/0.log" Jan 22 17:07:06 crc kubenswrapper[4704]: I0122 17:07:06.843532 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4tbm7_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e/operator/0.log" Jan 22 17:07:06 crc kubenswrapper[4704]: I0122 17:07:06.860060 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-nsxrl_b1514213-cdd4-4219-aefc-7d8b37aa38c4/observability-ui-dashboards/0.log" Jan 22 17:07:07 crc kubenswrapper[4704]: I0122 17:07:07.085227 4704 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4j4ln_be3ad292-b6cf-42bc-8eee-2768c60702be/perses-operator/0.log" Jan 22 17:07:15 crc kubenswrapper[4704]: I0122 17:07:15.636335 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:07:15 crc kubenswrapper[4704]: E0122 17:07:15.637582 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:07:22 crc kubenswrapper[4704]: I0122 17:07:22.521664 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-ds86s_c0d5e1b8-9820-4023-bec6-9337958b2ffb/kube-rbac-proxy/0.log" Jan 22 17:07:22 crc kubenswrapper[4704]: I0122 17:07:22.815914 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-ds86s_c0d5e1b8-9820-4023-bec6-9337958b2ffb/controller/0.log" Jan 22 17:07:22 crc kubenswrapper[4704]: I0122 17:07:22.873313 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-frr-files/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.135613 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-reloader/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.159944 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-reloader/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.167174 4704 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-frr-files/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.168067 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-metrics/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.363475 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-frr-files/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.372934 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-metrics/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.417694 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-metrics/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.435534 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-reloader/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.749131 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-metrics/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.754316 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-reloader/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.757616 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/cp-frr-files/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.772127 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/controller/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.922301 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/frr-metrics/0.log" Jan 22 17:07:23 crc kubenswrapper[4704]: I0122 17:07:23.953647 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/kube-rbac-proxy/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.005528 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/kube-rbac-proxy-frr/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.140785 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/reloader/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.217197 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-78tsx_c49eb63d-b748-4048-b834-c33235bbc9b6/frr-k8s-webhook-server/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.408007 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b4f96dd45-fsw4x_ba584155-01b6-46e0-b1df-a5444d77bb39/manager/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.526611 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5868d7bb64-nb9lq_7ab62bf8-d0d1-4f4c-ab39-4aa838a8587f/webhook-server/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.636875 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bhblk_c3d18830-eb73-458a-aa2f-fd3bf430d009/kube-rbac-proxy/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.962567 4704 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bhblk_c3d18830-eb73-458a-aa2f-fd3bf430d009/speaker/0.log" Jan 22 17:07:24 crc kubenswrapper[4704]: I0122 17:07:24.973870 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-48bzl_2693c567-580c-4c07-a470-639f63bc75aa/frr/0.log" Jan 22 17:07:27 crc kubenswrapper[4704]: I0122 17:07:27.637700 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:07:27 crc kubenswrapper[4704]: E0122 17:07:27.637943 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:07:38 crc kubenswrapper[4704]: I0122 17:07:38.633888 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:07:38 crc kubenswrapper[4704]: E0122 17:07:38.634785 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:07:39 crc kubenswrapper[4704]: I0122 17:07:39.802056 4704 scope.go:117] "RemoveContainer" containerID="6c1218064fc4093e0762edae03c8db451a9f1be5979771079586d32f6dc20fad" Jan 22 17:07:39 crc kubenswrapper[4704]: I0122 17:07:39.831935 4704 scope.go:117] "RemoveContainer" 
containerID="613c624a94aa89dce0e5f7c167a07454e81f7f4468cc7b95f6a508b7e633c91a" Jan 22 17:07:50 crc kubenswrapper[4704]: I0122 17:07:50.964219 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_48dfb2d3-192d-4033-afcf-1abfb1a31f59/init-config-reloader/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.085518 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_48dfb2d3-192d-4033-afcf-1abfb1a31f59/init-config-reloader/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.152910 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_48dfb2d3-192d-4033-afcf-1abfb1a31f59/alertmanager/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.174773 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_48dfb2d3-192d-4033-afcf-1abfb1a31f59/config-reloader/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.316477 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6264a778-2267-4b74-935f-9657112e560e/ceilometer-central-agent/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.411290 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6264a778-2267-4b74-935f-9657112e560e/ceilometer-notification-agent/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.466122 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6264a778-2267-4b74-935f-9657112e560e/proxy-httpd/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.491893 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6264a778-2267-4b74-935f-9657112e560e/sg-core/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 
17:07:51.634739 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:07:51 crc kubenswrapper[4704]: E0122 17:07:51.635076 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.732501 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-7b5844cd49-x5nb5_c8cb7890-5ba8-4f8f-a18a-d0ea0c36516f/keystone-api/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.771132 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-cron-29485021-xqqxb_b9789623-d528-4ee3-bb97-c687256c928c/keystone-cron/0.log" Jan 22 17:07:51 crc kubenswrapper[4704]: I0122 17:07:51.972139 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_d513054b-70e9-4e87-99ab-934736abc0bc/kube-state-metrics/0.log" Jan 22 17:07:52 crc kubenswrapper[4704]: I0122 17:07:52.411767 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_9ba981e4-1f66-452c-b481-f482feda87b3/mysql-bootstrap/0.log" Jan 22 17:07:52 crc kubenswrapper[4704]: I0122 17:07:52.619909 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_9ba981e4-1f66-452c-b481-f482feda87b3/galera/0.log" Jan 22 17:07:52 crc kubenswrapper[4704]: I0122 17:07:52.719299 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_9ba981e4-1f66-452c-b481-f482feda87b3/mysql-bootstrap/0.log" Jan 22 17:07:52 crc kubenswrapper[4704]: I0122 17:07:52.772243 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_9df6412a-01ed-4d5c-826e-956eb7aca29e/openstackclient/0.log" Jan 22 17:07:52 crc kubenswrapper[4704]: I0122 17:07:52.923669 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_45beee7e-d2c1-4150-a2d1-f9a6bf02eb42/init-config-reloader/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.107357 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_45beee7e-d2c1-4150-a2d1-f9a6bf02eb42/config-reloader/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.133763 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_45beee7e-d2c1-4150-a2d1-f9a6bf02eb42/prometheus/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.165444 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_45beee7e-d2c1-4150-a2d1-f9a6bf02eb42/init-config-reloader/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.357677 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_45beee7e-d2c1-4150-a2d1-f9a6bf02eb42/thanos-sidecar/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.447879 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_1b171faa-1b29-41f7-9582-8e8003603f75/setup-container/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.647950 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_1b171faa-1b29-41f7-9582-8e8003603f75/setup-container/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.677497 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_1b171faa-1b29-41f7-9582-8e8003603f75/rabbitmq/0.log" Jan 22 17:07:53 crc kubenswrapper[4704]: I0122 17:07:53.883429 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_e2ef8e1a-f771-48a2-a61b-866950a3f0a0/setup-container/0.log" Jan 22 17:07:54 crc kubenswrapper[4704]: I0122 17:07:54.154687 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_e2ef8e1a-f771-48a2-a61b-866950a3f0a0/setup-container/0.log" Jan 22 17:07:54 crc kubenswrapper[4704]: I0122 17:07:54.205633 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_e2ef8e1a-f771-48a2-a61b-866950a3f0a0/rabbitmq/0.log" Jan 22 17:08:02 crc kubenswrapper[4704]: I0122 17:08:02.586887 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_de39e0b8-3a6a-414d-a4db-e941e38230dd/memcached/0.log" Jan 22 17:08:06 crc kubenswrapper[4704]: I0122 17:08:06.634388 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:08:06 crc kubenswrapper[4704]: E0122 17:08:06.636422 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:08:12 crc kubenswrapper[4704]: I0122 17:08:12.592981 4704 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr_365dc18e-4b90-48f3-9aa9-214fc97be804/util/0.log" Jan 22 17:08:12 crc kubenswrapper[4704]: I0122 17:08:12.771945 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr_365dc18e-4b90-48f3-9aa9-214fc97be804/util/0.log" Jan 22 17:08:12 crc kubenswrapper[4704]: I0122 17:08:12.837092 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr_365dc18e-4b90-48f3-9aa9-214fc97be804/pull/0.log" Jan 22 17:08:12 crc kubenswrapper[4704]: I0122 17:08:12.838759 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr_365dc18e-4b90-48f3-9aa9-214fc97be804/pull/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.024013 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr_365dc18e-4b90-48f3-9aa9-214fc97be804/pull/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.047731 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr_365dc18e-4b90-48f3-9aa9-214fc97be804/util/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.091299 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8dlhr_365dc18e-4b90-48f3-9aa9-214fc97be804/extract/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.203156 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w_29d5297c-3dd2-4a53-8945-3f6969c6085c/util/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.458039 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w_29d5297c-3dd2-4a53-8945-3f6969c6085c/pull/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.479762 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w_29d5297c-3dd2-4a53-8945-3f6969c6085c/util/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.555215 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w_29d5297c-3dd2-4a53-8945-3f6969c6085c/pull/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.688023 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w_29d5297c-3dd2-4a53-8945-3f6969c6085c/extract/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.705228 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w_29d5297c-3dd2-4a53-8945-3f6969c6085c/pull/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.707716 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc74r8w_29d5297c-3dd2-4a53-8945-3f6969c6085c/util/0.log" Jan 22 17:08:13 crc kubenswrapper[4704]: I0122 17:08:13.883651 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz_4ac8d18a-c3db-4598-aa53-dd69c190e6a3/util/0.log" Jan 22 
17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.077191 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz_4ac8d18a-c3db-4598-aa53-dd69c190e6a3/pull/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.107146 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz_4ac8d18a-c3db-4598-aa53-dd69c190e6a3/util/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.132419 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz_4ac8d18a-c3db-4598-aa53-dd69c190e6a3/pull/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.321999 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz_4ac8d18a-c3db-4598-aa53-dd69c190e6a3/extract/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.352490 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz_4ac8d18a-c3db-4598-aa53-dd69c190e6a3/pull/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.378373 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qh4nz_4ac8d18a-c3db-4598-aa53-dd69c190e6a3/util/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.516232 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4_ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d/util/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.870365 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4_ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d/util/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.913365 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4_ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d/pull/0.log" Jan 22 17:08:14 crc kubenswrapper[4704]: I0122 17:08:14.943719 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4_ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d/pull/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.068205 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4_ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d/pull/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.069762 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4_ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d/extract/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.081768 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lcqc4_ee5e4d0f-ffd8-49cb-98d7-3ac25b3dd91d/util/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.274563 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r2tck_9b1e8af3-da39-42d0-bc3e-5be66c218bfe/extract-utilities/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.397319 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r2tck_9b1e8af3-da39-42d0-bc3e-5be66c218bfe/extract-utilities/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 
17:08:15.444154 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r2tck_9b1e8af3-da39-42d0-bc3e-5be66c218bfe/extract-content/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.481951 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r2tck_9b1e8af3-da39-42d0-bc3e-5be66c218bfe/extract-content/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.634855 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r2tck_9b1e8af3-da39-42d0-bc3e-5be66c218bfe/extract-content/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.680230 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r2tck_9b1e8af3-da39-42d0-bc3e-5be66c218bfe/extract-utilities/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.869921 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tz6jz_17c24550-b095-488a-b3f7-773bcdb8c773/extract-utilities/0.log" Jan 22 17:08:15 crc kubenswrapper[4704]: I0122 17:08:15.892506 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r2tck_9b1e8af3-da39-42d0-bc3e-5be66c218bfe/registry-server/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.060982 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tz6jz_17c24550-b095-488a-b3f7-773bcdb8c773/extract-utilities/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.078240 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tz6jz_17c24550-b095-488a-b3f7-773bcdb8c773/extract-content/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.143555 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-tz6jz_17c24550-b095-488a-b3f7-773bcdb8c773/extract-content/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.308317 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tz6jz_17c24550-b095-488a-b3f7-773bcdb8c773/extract-utilities/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.345385 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tz6jz_17c24550-b095-488a-b3f7-773bcdb8c773/extract-content/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.648210 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vs2mz_40969928-6095-4242-80c7-a8daed2e28b1/marketplace-operator/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.684031 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rc7ct_4841bd3f-e66d-4d5b-8eef-7d7584d19c79/extract-utilities/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.792493 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tz6jz_17c24550-b095-488a-b3f7-773bcdb8c773/registry-server/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.846755 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rc7ct_4841bd3f-e66d-4d5b-8eef-7d7584d19c79/extract-utilities/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.877309 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rc7ct_4841bd3f-e66d-4d5b-8eef-7d7584d19c79/extract-content/0.log" Jan 22 17:08:16 crc kubenswrapper[4704]: I0122 17:08:16.882232 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rc7ct_4841bd3f-e66d-4d5b-8eef-7d7584d19c79/extract-content/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.077386 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rc7ct_4841bd3f-e66d-4d5b-8eef-7d7584d19c79/extract-utilities/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.088359 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rc7ct_4841bd3f-e66d-4d5b-8eef-7d7584d19c79/extract-content/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.189841 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77f4m_535a024a-4218-4bb1-86e5-f8b63f1b10c4/extract-utilities/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.213894 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rc7ct_4841bd3f-e66d-4d5b-8eef-7d7584d19c79/registry-server/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.368324 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77f4m_535a024a-4218-4bb1-86e5-f8b63f1b10c4/extract-utilities/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.379024 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77f4m_535a024a-4218-4bb1-86e5-f8b63f1b10c4/extract-content/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.402188 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77f4m_535a024a-4218-4bb1-86e5-f8b63f1b10c4/extract-content/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.555040 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77f4m_535a024a-4218-4bb1-86e5-f8b63f1b10c4/extract-utilities/0.log" 
Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.597005 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77f4m_535a024a-4218-4bb1-86e5-f8b63f1b10c4/extract-content/0.log" Jan 22 17:08:17 crc kubenswrapper[4704]: I0122 17:08:17.638067 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:08:17 crc kubenswrapper[4704]: E0122 17:08:17.638321 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:08:18 crc kubenswrapper[4704]: I0122 17:08:18.043255 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77f4m_535a024a-4218-4bb1-86e5-f8b63f1b10c4/registry-server/0.log" Jan 22 17:08:28 crc kubenswrapper[4704]: I0122 17:08:28.634149 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:08:28 crc kubenswrapper[4704]: E0122 17:08:28.634957 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:08:31 crc kubenswrapper[4704]: I0122 17:08:31.001175 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ddc665846-kh7xc_581b3ed3-6843-4e85-8187-2718699e8964/prometheus-operator-admission-webhook/0.log" Jan 22 17:08:31 crc kubenswrapper[4704]: I0122 17:08:31.067964 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-nptfd_8e5411a6-6909-463f-9794-35459abc62ff/prometheus-operator/0.log" Jan 22 17:08:31 crc kubenswrapper[4704]: I0122 17:08:31.079817 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7ddc665846-pvlpn_c8aeac14-9541-4d77-a63a-087807303ca7/prometheus-operator-admission-webhook/0.log" Jan 22 17:08:31 crc kubenswrapper[4704]: I0122 17:08:31.202030 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4tbm7_e43a41bb-98a7-48f7-8a29-1dc807c5ad5e/operator/0.log" Jan 22 17:08:31 crc kubenswrapper[4704]: I0122 17:08:31.237653 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-nsxrl_b1514213-cdd4-4219-aefc-7d8b37aa38c4/observability-ui-dashboards/0.log" Jan 22 17:08:31 crc kubenswrapper[4704]: I0122 17:08:31.261881 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4j4ln_be3ad292-b6cf-42bc-8eee-2768c60702be/perses-operator/0.log" Jan 22 17:08:39 crc kubenswrapper[4704]: I0122 17:08:39.939886 4704 scope.go:117] "RemoveContainer" containerID="59a9b69c09b8a35d777063ddb087f9ccd0ec4f0f87142fb129e724080190592a" Jan 22 17:08:39 crc kubenswrapper[4704]: I0122 17:08:39.964587 4704 scope.go:117] "RemoveContainer" containerID="3bd28cb90b5e51d8b693d43ca0ca83e3603d3ec4676d285194b3e550c178cabe" Jan 22 17:08:40 crc kubenswrapper[4704]: I0122 17:08:40.013889 4704 scope.go:117] "RemoveContainer" 
containerID="e85e235192df04e7ae1ec120e064f20c67e6e63d2ef7d011aa9e6df39722b424" Jan 22 17:08:40 crc kubenswrapper[4704]: I0122 17:08:40.029582 4704 scope.go:117] "RemoveContainer" containerID="9bb4e287e9121ddfe9b035fe020627e15e58ab6cf533ce6ec9e1f98eed37c52f" Jan 22 17:08:40 crc kubenswrapper[4704]: I0122 17:08:40.079547 4704 scope.go:117] "RemoveContainer" containerID="db1b858aa60e90725a1490624e98114fcfe3bb7cadd6a0e18f5b83b1f84ac7ff" Jan 22 17:08:40 crc kubenswrapper[4704]: I0122 17:08:40.096779 4704 scope.go:117] "RemoveContainer" containerID="a2b90acfa945d7d70b151b80471df18e8b38d0be29a969ff3ec775c738f7bfc0" Jan 22 17:08:42 crc kubenswrapper[4704]: I0122 17:08:42.633690 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:08:42 crc kubenswrapper[4704]: E0122 17:08:42.634633 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:08:54 crc kubenswrapper[4704]: I0122 17:08:54.633374 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:08:54 crc kubenswrapper[4704]: E0122 17:08:54.634296 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:09:06 crc 
kubenswrapper[4704]: I0122 17:09:06.633950 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:09:06 crc kubenswrapper[4704]: E0122 17:09:06.634649 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:09:21 crc kubenswrapper[4704]: I0122 17:09:21.634520 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:09:21 crc kubenswrapper[4704]: E0122 17:09:21.635221 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:09:36 crc kubenswrapper[4704]: I0122 17:09:36.635806 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:09:36 crc kubenswrapper[4704]: E0122 17:09:36.636738 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 
22 17:09:39 crc kubenswrapper[4704]: I0122 17:09:39.647529 4704 generic.go:334] "Generic (PLEG): container finished" podID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerID="d467fcb67c7f75fea34f97f051823b3b226c88e6b1df62c6915e3eaa94cb4a3c" exitCode=0 Jan 22 17:09:39 crc kubenswrapper[4704]: I0122 17:09:39.647625 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kw27s/must-gather-7swx5" event={"ID":"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8","Type":"ContainerDied","Data":"d467fcb67c7f75fea34f97f051823b3b226c88e6b1df62c6915e3eaa94cb4a3c"} Jan 22 17:09:39 crc kubenswrapper[4704]: I0122 17:09:39.648667 4704 scope.go:117] "RemoveContainer" containerID="d467fcb67c7f75fea34f97f051823b3b226c88e6b1df62c6915e3eaa94cb4a3c" Jan 22 17:09:39 crc kubenswrapper[4704]: I0122 17:09:39.972485 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kw27s_must-gather-7swx5_bcdc9a4b-056b-47b4-81eb-4bff9ab425b8/gather/0.log" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.217133 4704 scope.go:117] "RemoveContainer" containerID="54a4de63cf0d02595e13ff122de45eddb78e1fe379066e42dc4936578b6e057c" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.235972 4704 scope.go:117] "RemoveContainer" containerID="8709fb3024b9a4277d000a5910dd5c0a92d322187031c389dd7971660a9c4f66" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.289828 4704 scope.go:117] "RemoveContainer" containerID="8fd2fd2280010213ef609f9b7a989cc0b20bc1c6543a2e87a3a40a6fef70dd27" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.320205 4704 scope.go:117] "RemoveContainer" containerID="c30161fc5567f2be54f3d1c7f9fff57e8583dc81d1abd60cabffdf5378436b58" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.348179 4704 scope.go:117] "RemoveContainer" containerID="c5a1b127eca5de1fad7d76d248ec7fc4a1bc076edf3b010c4389fed50f0c7703" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.393414 4704 scope.go:117] "RemoveContainer" 
containerID="61d831c6ad4b89c33ee6606088a35ebffbb494d14bcbd2c34526959184400d23" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.409843 4704 scope.go:117] "RemoveContainer" containerID="f711238096660cbdbbe2b05cc7adedaa250749d51af6f727dac170a409ebc75d" Jan 22 17:09:40 crc kubenswrapper[4704]: I0122 17:09:40.443883 4704 scope.go:117] "RemoveContainer" containerID="adcc59c8f67189ff36ca8240aa1c33a4ac9c87ab1dfedf763a56668e9367f564" Jan 22 17:09:47 crc kubenswrapper[4704]: I0122 17:09:47.567584 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kw27s/must-gather-7swx5"] Jan 22 17:09:47 crc kubenswrapper[4704]: I0122 17:09:47.569067 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kw27s/must-gather-7swx5" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerName="copy" containerID="cri-o://d91d70b0b89bfd4bf31a41eea2154fa5c683549e7ea1b0911cdbade824bcac43" gracePeriod=2 Jan 22 17:09:47 crc kubenswrapper[4704]: I0122 17:09:47.576256 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kw27s/must-gather-7swx5"] Jan 22 17:09:47 crc kubenswrapper[4704]: I0122 17:09:47.639007 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd" Jan 22 17:09:47 crc kubenswrapper[4704]: E0122 17:09:47.639362 4704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hsg8r_openshift-machine-config-operator(e8e25829-99af-4717-87f3-43a79b9d8c26)\"" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" Jan 22 17:09:47 crc kubenswrapper[4704]: I0122 17:09:47.720302 4704 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-kw27s_must-gather-7swx5_bcdc9a4b-056b-47b4-81eb-4bff9ab425b8/copy/0.log" Jan 22 17:09:47 crc kubenswrapper[4704]: I0122 17:09:47.720589 4704 generic.go:334] "Generic (PLEG): container finished" podID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerID="d91d70b0b89bfd4bf31a41eea2154fa5c683549e7ea1b0911cdbade824bcac43" exitCode=143 Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.064772 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kw27s_must-gather-7swx5_bcdc9a4b-056b-47b4-81eb-4bff9ab425b8/copy/0.log" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.065425 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kw27s/must-gather-7swx5" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.146146 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-must-gather-output\") pod \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.146562 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hsvv\" (UniqueName: \"kubernetes.io/projected/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-kube-api-access-8hsvv\") pod \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\" (UID: \"bcdc9a4b-056b-47b4-81eb-4bff9ab425b8\") " Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.152009 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-kube-api-access-8hsvv" (OuterVolumeSpecName: "kube-api-access-8hsvv") pod "bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" (UID: "bcdc9a4b-056b-47b4-81eb-4bff9ab425b8"). InnerVolumeSpecName "kube-api-access-8hsvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.248316 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hsvv\" (UniqueName: \"kubernetes.io/projected/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-kube-api-access-8hsvv\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.259336 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" (UID: "bcdc9a4b-056b-47b4-81eb-4bff9ab425b8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.349563 4704 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.733658 4704 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kw27s_must-gather-7swx5_bcdc9a4b-056b-47b4-81eb-4bff9ab425b8/copy/0.log" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.735221 4704 scope.go:117] "RemoveContainer" containerID="d91d70b0b89bfd4bf31a41eea2154fa5c683549e7ea1b0911cdbade824bcac43" Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.735299 4704 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kw27s/must-gather-7swx5"
Jan 22 17:09:48 crc kubenswrapper[4704]: I0122 17:09:48.756758 4704 scope.go:117] "RemoveContainer" containerID="d467fcb67c7f75fea34f97f051823b3b226c88e6b1df62c6915e3eaa94cb4a3c"
Jan 22 17:09:49 crc kubenswrapper[4704]: I0122 17:09:49.650239 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" path="/var/lib/kubelet/pods/bcdc9a4b-056b-47b4-81eb-4bff9ab425b8/volumes"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.706324 4704 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kt5xw"]
Jan 22 17:09:52 crc kubenswrapper[4704]: E0122 17:09:52.707445 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerName="copy"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707467 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerName="copy"
Jan 22 17:09:52 crc kubenswrapper[4704]: E0122 17:09:52.707492 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="registry-server"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707503 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="registry-server"
Jan 22 17:09:52 crc kubenswrapper[4704]: E0122 17:09:52.707526 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="extract-content"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707538 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="extract-content"
Jan 22 17:09:52 crc kubenswrapper[4704]: E0122 17:09:52.707559 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerName="gather"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707570 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerName="gather"
Jan 22 17:09:52 crc kubenswrapper[4704]: E0122 17:09:52.707584 4704 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="extract-utilities"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707597 4704 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="extract-utilities"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707856 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerName="copy"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707882 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fba716-7f3a-4bde-b0ee-b62a75783db2" containerName="registry-server"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.707908 4704 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdc9a4b-056b-47b4-81eb-4bff9ab425b8" containerName="gather"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.709769 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.730137 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kt5xw"]
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.816589 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-catalog-content\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.816658 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qkj\" (UniqueName: \"kubernetes.io/projected/7e805837-c5a8-45d0-be06-05095c22ca98-kube-api-access-q4qkj\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.816857 4704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-utilities\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.918612 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-catalog-content\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.918686 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qkj\" (UniqueName: \"kubernetes.io/projected/7e805837-c5a8-45d0-be06-05095c22ca98-kube-api-access-q4qkj\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.918859 4704 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-utilities\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.919191 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-catalog-content\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.919279 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-utilities\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:52 crc kubenswrapper[4704]: I0122 17:09:52.939311 4704 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qkj\" (UniqueName: \"kubernetes.io/projected/7e805837-c5a8-45d0-be06-05095c22ca98-kube-api-access-q4qkj\") pod \"community-operators-kt5xw\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") " pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:53 crc kubenswrapper[4704]: I0122 17:09:53.036283 4704 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:09:53 crc kubenswrapper[4704]: I0122 17:09:53.560055 4704 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kt5xw"]
Jan 22 17:09:53 crc kubenswrapper[4704]: I0122 17:09:53.776952 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerStarted","Data":"84d6bb4cd30c2e38cb9a17c9082ce5ac6485d404a2e1e1b3115e35d14174c2da"}
Jan 22 17:09:53 crc kubenswrapper[4704]: I0122 17:09:53.777004 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerStarted","Data":"6f8b2bf9a54e75ad4051f82b4369dc5909a1307743794a6d8ac1ed5e8c723a16"}
Jan 22 17:09:54 crc kubenswrapper[4704]: I0122 17:09:54.790077 4704 generic.go:334] "Generic (PLEG): container finished" podID="7e805837-c5a8-45d0-be06-05095c22ca98" containerID="84d6bb4cd30c2e38cb9a17c9082ce5ac6485d404a2e1e1b3115e35d14174c2da" exitCode=0
Jan 22 17:09:54 crc kubenswrapper[4704]: I0122 17:09:54.790239 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerDied","Data":"84d6bb4cd30c2e38cb9a17c9082ce5ac6485d404a2e1e1b3115e35d14174c2da"}
Jan 22 17:09:54 crc kubenswrapper[4704]: I0122 17:09:54.793126 4704 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 17:09:55 crc kubenswrapper[4704]: I0122 17:09:55.799107 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerStarted","Data":"af3592e8176c4848135efc5f671c0a24b6a3d21f9245854f876028f6954f9249"}
Jan 22 17:09:56 crc kubenswrapper[4704]: I0122 17:09:56.809926 4704 generic.go:334] "Generic (PLEG): container finished" podID="7e805837-c5a8-45d0-be06-05095c22ca98" containerID="af3592e8176c4848135efc5f671c0a24b6a3d21f9245854f876028f6954f9249" exitCode=0
Jan 22 17:09:56 crc kubenswrapper[4704]: I0122 17:09:56.809968 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerDied","Data":"af3592e8176c4848135efc5f671c0a24b6a3d21f9245854f876028f6954f9249"}
Jan 22 17:09:57 crc kubenswrapper[4704]: I0122 17:09:57.824104 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerStarted","Data":"55af9b2643412dcaf6f14ead2bd0e1ae7604a36c2b3ab9eb5c8d9b11b6dec6f9"}
Jan 22 17:09:57 crc kubenswrapper[4704]: I0122 17:09:57.852037 4704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kt5xw" podStartSLOduration=3.05105378 podStartE2EDuration="5.852015639s" podCreationTimestamp="2026-01-22 17:09:52 +0000 UTC" firstStartedPulling="2026-01-22 17:09:54.792626604 +0000 UTC m=+2487.437173344" lastFinishedPulling="2026-01-22 17:09:57.593588493 +0000 UTC m=+2490.238135203" observedRunningTime="2026-01-22 17:09:57.847984934 +0000 UTC m=+2490.492531704" watchObservedRunningTime="2026-01-22 17:09:57.852015639 +0000 UTC m=+2490.496562349"
Jan 22 17:10:02 crc kubenswrapper[4704]: I0122 17:10:02.634750 4704 scope.go:117] "RemoveContainer" containerID="23c43a3587fcb4efe3d5cf4c642adda4284f788130250ad3be8172a4b38885fd"
Jan 22 17:10:03 crc kubenswrapper[4704]: I0122 17:10:03.037435 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:10:03 crc kubenswrapper[4704]: I0122 17:10:03.037945 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:10:03 crc kubenswrapper[4704]: I0122 17:10:03.091640 4704 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:10:03 crc kubenswrapper[4704]: I0122 17:10:03.874819 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" event={"ID":"e8e25829-99af-4717-87f3-43a79b9d8c26","Type":"ContainerStarted","Data":"9e0cd3f2dbccc25f20aa2018949f9f36fc3a71648f49c5cc57b85790aab16597"}
Jan 22 17:10:03 crc kubenswrapper[4704]: I0122 17:10:03.932021 4704 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:10:06 crc kubenswrapper[4704]: I0122 17:10:06.694630 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kt5xw"]
Jan 22 17:10:06 crc kubenswrapper[4704]: I0122 17:10:06.695232 4704 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kt5xw" podUID="7e805837-c5a8-45d0-be06-05095c22ca98" containerName="registry-server" containerID="cri-o://55af9b2643412dcaf6f14ead2bd0e1ae7604a36c2b3ab9eb5c8d9b11b6dec6f9" gracePeriod=2
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.459362 4704 generic.go:334] "Generic (PLEG): container finished" podID="7e805837-c5a8-45d0-be06-05095c22ca98" containerID="55af9b2643412dcaf6f14ead2bd0e1ae7604a36c2b3ab9eb5c8d9b11b6dec6f9" exitCode=0
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.459420 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerDied","Data":"55af9b2643412dcaf6f14ead2bd0e1ae7604a36c2b3ab9eb5c8d9b11b6dec6f9"}
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.666739 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.753967 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qkj\" (UniqueName: \"kubernetes.io/projected/7e805837-c5a8-45d0-be06-05095c22ca98-kube-api-access-q4qkj\") pod \"7e805837-c5a8-45d0-be06-05095c22ca98\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") "
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.754154 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-catalog-content\") pod \"7e805837-c5a8-45d0-be06-05095c22ca98\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") "
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.754248 4704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-utilities\") pod \"7e805837-c5a8-45d0-be06-05095c22ca98\" (UID: \"7e805837-c5a8-45d0-be06-05095c22ca98\") "
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.756079 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-utilities" (OuterVolumeSpecName: "utilities") pod "7e805837-c5a8-45d0-be06-05095c22ca98" (UID: "7e805837-c5a8-45d0-be06-05095c22ca98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.760331 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e805837-c5a8-45d0-be06-05095c22ca98-kube-api-access-q4qkj" (OuterVolumeSpecName: "kube-api-access-q4qkj") pod "7e805837-c5a8-45d0-be06-05095c22ca98" (UID: "7e805837-c5a8-45d0-be06-05095c22ca98"). InnerVolumeSpecName "kube-api-access-q4qkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.801552 4704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e805837-c5a8-45d0-be06-05095c22ca98" (UID: "7e805837-c5a8-45d0-be06-05095c22ca98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.856322 4704 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.856465 4704 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e805837-c5a8-45d0-be06-05095c22ca98-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 17:10:07 crc kubenswrapper[4704]: I0122 17:10:07.856479 4704 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qkj\" (UniqueName: \"kubernetes.io/projected/7e805837-c5a8-45d0-be06-05095c22ca98-kube-api-access-q4qkj\") on node \"crc\" DevicePath \"\""
Jan 22 17:10:08 crc kubenswrapper[4704]: I0122 17:10:08.467550 4704 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt5xw" event={"ID":"7e805837-c5a8-45d0-be06-05095c22ca98","Type":"ContainerDied","Data":"6f8b2bf9a54e75ad4051f82b4369dc5909a1307743794a6d8ac1ed5e8c723a16"}
Jan 22 17:10:08 crc kubenswrapper[4704]: I0122 17:10:08.467867 4704 scope.go:117] "RemoveContainer" containerID="55af9b2643412dcaf6f14ead2bd0e1ae7604a36c2b3ab9eb5c8d9b11b6dec6f9"
Jan 22 17:10:08 crc kubenswrapper[4704]: I0122 17:10:08.467618 4704 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kt5xw"
Jan 22 17:10:08 crc kubenswrapper[4704]: I0122 17:10:08.497998 4704 scope.go:117] "RemoveContainer" containerID="af3592e8176c4848135efc5f671c0a24b6a3d21f9245854f876028f6954f9249"
Jan 22 17:10:08 crc kubenswrapper[4704]: I0122 17:10:08.547699 4704 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kt5xw"]
Jan 22 17:10:08 crc kubenswrapper[4704]: I0122 17:10:08.547954 4704 scope.go:117] "RemoveContainer" containerID="84d6bb4cd30c2e38cb9a17c9082ce5ac6485d404a2e1e1b3115e35d14174c2da"
Jan 22 17:10:08 crc kubenswrapper[4704]: I0122 17:10:08.561348 4704 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kt5xw"]
Jan 22 17:10:09 crc kubenswrapper[4704]: I0122 17:10:09.652651 4704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e805837-c5a8-45d0-be06-05095c22ca98" path="/var/lib/kubelet/pods/7e805837-c5a8-45d0-be06-05095c22ca98/volumes"
Jan 22 17:10:40 crc kubenswrapper[4704]: I0122 17:10:40.571896 4704 scope.go:117] "RemoveContainer" containerID="83d92b05303375edc90d3fcb8d7c5fd1bbfc564fab7ef6b439f7046828709a7b"
Jan 22 17:10:40 crc kubenswrapper[4704]: I0122 17:10:40.590055 4704 scope.go:117] "RemoveContainer" containerID="c8d796cdfb6d7a46a243a6a48d0b846c4ff51bc5e2c86aa71a4494b7d567cdce"
Jan 22 17:10:40 crc kubenswrapper[4704]: I0122 17:10:40.623043 4704 scope.go:117] "RemoveContainer" containerID="d33b1efe1b665bf6e7f08d44e358a2502f501a0d2e4c11ecb1ee5e5b877a1132"
Jan 22 17:10:40 crc kubenswrapper[4704]: I0122 17:10:40.638726 4704 scope.go:117] "RemoveContainer" containerID="80ba83e990d481a5601993ffdb8982c86b29faa1d8d286aaaa4f356a3aff3efa"
Jan 22 17:10:40 crc kubenswrapper[4704]: I0122 17:10:40.657206 4704 scope.go:117] "RemoveContainer" containerID="25bb71dfef71ccbfc2c9c4a48e8240524db51f868c8fdecd00186a9a5fa2ab65"
Jan 22 17:10:40 crc kubenswrapper[4704]: I0122 17:10:40.696571 4704 scope.go:117] "RemoveContainer" containerID="66f5ade15a8ec877cbffc7dad344990c4dc9c7952e4efc0c7b39474f1f6d50a6"
Jan 22 17:12:19 crc kubenswrapper[4704]: I0122 17:12:19.086939 4704 patch_prober.go:28] interesting pod/machine-config-daemon-hsg8r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 17:12:19 crc kubenswrapper[4704]: I0122 17:12:19.087561 4704 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hsg8r" podUID="e8e25829-99af-4717-87f3-43a79b9d8c26" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"